00:00:00.000 Started by upstream project "autotest-nightly" build number 3881
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3261
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.092 using credential 00000000-0000-0000-0000-000000000002
00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.127 Fetching changes from the remote Git repository
00:00:00.130 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.177 Using shallow fetch with depth 1
00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.177 > git --version # timeout=10
00:00:00.210 > git --version # 'git version 2.39.2'
00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.062 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.075 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.087 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:04.087 > git config core.sparsecheckout # timeout=10
00:00:04.098 > git read-tree -mu HEAD # timeout=10
00:00:04.115 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:04.136 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:04.137 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:04.222 [Pipeline] Start of Pipeline
00:00:04.239 [Pipeline] library
00:00:04.241 Loading library shm_lib@master
00:00:04.241 Library shm_lib@master is cached. Copying from home.
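The checkout above pins the build-pool helper repo (jbp) to one commit via a depth-1 fetch instead of a full clone. A minimal hand-run sketch of the same pattern, using the URL and SHA from the log (the trailing "# timeout=N" markers are Jenkins annotations, approximated here with timeout):

    #!/bin/bash
    # Hedged sketch: reproduce the pinned, shallow checkout by hand.
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    sha=4b79378c7834917407ff4d2cff4edf1dcbb13c5f
    git init jbp && cd jbp
    git config remote.origin.url "$repo"
    # --depth=1 fetches only the tip of master, as in the log
    timeout 300 git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    # works because the pinned SHA is the fetched tip (FETCH_HEAD)
    git checkout -f "$sha"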
00:00:04.257 [Pipeline] node 00:00:19.260 Still waiting to schedule task 00:00:19.260 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP14’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.260 ‘GP16’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP20’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘GP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘ME1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘ME2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘ME3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘PE5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM29’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM30’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM6’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘SM8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘VM-host-WFP1’ is offline 00:00:19.261 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP13’ doesn’t have 
label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP23’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP27’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP41’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP42’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP46’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘ipxe-staging’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.261 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’ 00:01:02.911 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:02.913 [Pipeline] { 00:01:02.925 [Pipeline] catchError 00:01:02.926 [Pipeline] { 00:01:02.942 [Pipeline] wrap 00:01:02.954 [Pipeline] { 00:01:02.963 [Pipeline] stage 00:01:02.965 [Pipeline] { (Prologue) 00:01:02.990 [Pipeline] echo 00:01:02.992 Node: VM-host-SM0 00:01:02.999 [Pipeline] cleanWs 00:01:03.008 [WS-CLEANUP] Deleting project workspace... 00:01:03.008 [WS-CLEANUP] Deferred wipeout is used... 
00:01:03.013 [WS-CLEANUP] done
00:01:03.341 [Pipeline] setCustomBuildProperty
00:01:03.413 [Pipeline] httpRequest
00:01:03.434 [Pipeline] echo
00:01:03.435 Sorcerer 10.211.164.101 is alive
00:01:03.443 [Pipeline] httpRequest
00:01:03.447 HttpMethod: GET
00:01:03.447 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:01:03.447 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:01:03.448 Response Code: HTTP/1.1 200 OK
00:01:03.449 Success: Status code 200 is in the accepted range: 200,404
00:01:03.449 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:01:03.785 [Pipeline] sh
00:01:04.135 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:01:04.151 [Pipeline] httpRequest
00:01:04.168 [Pipeline] echo
00:01:04.169 Sorcerer 10.211.164.101 is alive
00:01:04.178 [Pipeline] httpRequest
00:01:04.182 HttpMethod: GET
00:01:04.182 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:04.183 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:04.183 Response Code: HTTP/1.1 200 OK
00:01:04.183 Success: Status code 200 is in the accepted range: 200,404
00:01:04.184 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:08.345 [Pipeline] sh
00:01:08.621 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:11.913 [Pipeline] sh
00:01:12.192 + git -C spdk log --oneline -n5
00:01:12.192 719d03c6a sock/uring: only register net impl if supported
00:01:12.192 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:12.192 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:12.192 6c7c1f57e accel: add sequence outstanding stat
00:01:12.192 3bc8e6a26 accel: add utility to put task
00:01:12.213 [Pipeline] writeFile
00:01:12.235 [Pipeline] sh
00:01:12.519 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:12.529 [Pipeline] sh
00:01:12.807 + cat autorun-spdk.conf
00:01:12.807 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.807 SPDK_TEST_NVMF=1
00:01:12.807 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.807 SPDK_TEST_VFIOUSER=1
00:01:12.807 SPDK_TEST_USDT=1
00:01:12.807 SPDK_RUN_ASAN=1
00:01:12.807 SPDK_RUN_UBSAN=1
00:01:12.807 SPDK_TEST_NVMF_MDNS=1
00:01:12.807 NET_TYPE=virt
00:01:12.807 SPDK_JSONRPC_GO_CLIENT=1
00:01:12.807 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.813 RUN_NIGHTLY=1
00:01:12.816 [Pipeline] }
00:01:12.830 [Pipeline] // stage
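autorun-spdk.conf, dumped above, is plain KEY=value shell syntax, which is why the scripts that follow can simply source it. A minimal sketch (not the actual SPDK scripts) of consuming such a file:

    #!/bin/bash
    # Hedged sketch: load a KEY=value config like autorun-spdk.conf.
    set -a                      # auto-export every variable the file sets
    source ./autorun-spdk.conf
    set +a
    if [[ "${SPDK_TEST_NVMF:-0}" == 1 ]]; then
        echo "NVMe-oF tests enabled, transport: $SPDK_TEST_NVMF_TRANSPORT"
    fi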
00:01:12.844 [Pipeline] stage
00:01:12.846 [Pipeline] { (Run VM)
00:01:12.859 [Pipeline] sh
00:01:13.147 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:13.147 + echo 'Start stage prepare_nvme.sh'
00:01:13.147 Start stage prepare_nvme.sh
00:01:13.147 + [[ -n 5 ]]
00:01:13.147 + disk_prefix=ex5
00:01:13.147 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:13.148 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:13.148 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:13.148 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.148 ++ SPDK_TEST_NVMF=1
00:01:13.148 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.148 ++ SPDK_TEST_VFIOUSER=1
00:01:13.148 ++ SPDK_TEST_USDT=1
00:01:13.148 ++ SPDK_RUN_ASAN=1
00:01:13.148 ++ SPDK_RUN_UBSAN=1
00:01:13.148 ++ SPDK_TEST_NVMF_MDNS=1
00:01:13.148 ++ NET_TYPE=virt
00:01:13.148 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:13.148 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:13.148 ++ RUN_NIGHTLY=1
00:01:13.148 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:13.148 + nvme_files=()
00:01:13.148 + declare -A nvme_files
00:01:13.148 + backend_dir=/var/lib/libvirt/images/backends
00:01:13.148 + nvme_files['nvme.img']=5G
00:01:13.148 + nvme_files['nvme-cmb.img']=5G
00:01:13.148 + nvme_files['nvme-multi0.img']=4G
00:01:13.148 + nvme_files['nvme-multi1.img']=4G
00:01:13.148 + nvme_files['nvme-multi2.img']=4G
00:01:13.148 + nvme_files['nvme-openstack.img']=8G
00:01:13.148 + nvme_files['nvme-zns.img']=5G
00:01:13.148 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:13.148 + (( SPDK_TEST_FTL == 1 ))
00:01:13.148 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:13.148 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:13.148 + for nvme in "${!nvme_files[@]}"
00:01:13.148 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:13.148 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.148 + for nvme in "${!nvme_files[@]}"
00:01:13.148 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:13.714 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.714 + for nvme in "${!nvme_files[@]}"
00:01:13.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:13.714 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:13.714 + for nvme in "${!nvme_files[@]}"
00:01:13.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:13.714 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.714 + for nvme in "${!nvme_files[@]}"
00:01:13.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:13.973 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.973 + for nvme in "${!nvme_files[@]}"
00:01:13.973 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:13.973 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:13.973 + for nvme in "${!nvme_files[@]}"
00:01:13.973 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:14.539 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:14.539 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:14.539 + echo 'End stage prepare_nvme.sh'
00:01:14.539 End stage prepare_nvme.sh
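create_nvme_img.sh lives in spdk/scripts/vagrant; judging by the "Formatting ... fmt=raw ... preallocation=falloc" lines it prints, it behaves like a plain qemu-img call. A hedged stand-in for the loop above:

    #!/bin/bash
    # Hedged stand-in for the create_nvme_img.sh loop: preallocate raw
    # backing files with the sizes the job declares above.
    backend_dir=/var/lib/libvirt/images/backends
    # (the job also creates nvme-cmb, nvme-openstack and nvme-zns variants)
    declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G
                           [nvme-multi1.img]=4G [nvme-multi2.img]=4G)
    sudo mkdir -p "$backend_dir"
    for name in "${!nvme_files[@]}"; do
        sudo qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex5-$name" "${nvme_files[$name]}"
    done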
00:01:14.551 [Pipeline] sh
00:01:14.830 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:14.830 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38
00:01:14.830
00:01:14.830 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:14.830 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:14.830 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:14.830 HELP=0
00:01:14.830 DRY_RUN=0
00:01:14.830 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:01:14.830 NVME_DISKS_TYPE=nvme,nvme,
00:01:14.830 NVME_AUTO_CREATE=0
00:01:14.830 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:01:14.830 NVME_CMB=,,
00:01:14.830 NVME_PMR=,,
00:01:14.830 NVME_ZNS=,,
00:01:14.830 NVME_MS=,,
00:01:14.830 NVME_FDP=,,
00:01:14.830 SPDK_VAGRANT_DISTRO=fedora38
00:01:14.830 SPDK_VAGRANT_VMCPU=10
00:01:14.830 SPDK_VAGRANT_VMRAM=12288
00:01:14.830 SPDK_VAGRANT_PROVIDER=libvirt
00:01:14.830 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:14.830 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:14.830 SPDK_OPENSTACK_NETWORK=0
00:01:14.830 VAGRANT_PACKAGE_BOX=0
00:01:14.830 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:14.830 FORCE_DISTRO=true
00:01:14.830 VAGRANT_BOX_VERSION=
00:01:14.830 EXTRA_VAGRANTFILES=
00:01:14.830 NIC_MODEL=e1000
00:01:14.830
00:01:14.830 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt'
00:01:14.830 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:19.012 Bringing machine 'default' up with 'libvirt' provider...
00:01:19.270 ==> default: Creating image (snapshot of base box volume).
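The environment dump above is what vagrant_create_vm.sh hands to SPDK's Vagrantfile; the SPDK_VAGRANT_* and NVME_* variables steer CPU, memory, provider, and disk wiring. Roughly, and glossing over the script's templating of disks and proxy settings, the bring-up amounts to this hedged sketch:

    #!/bin/bash
    # Hedged sketch of the vagrant bring-up this step performs.
    export SPDK_VAGRANT_DISTRO=fedora38 SPDK_VAGRANT_VMCPU=10 \
           SPDK_VAGRANT_VMRAM=12288 SPDK_VAGRANT_PROVIDER=libvirt NIC_MODEL=e1000
    mkdir -p fedora38-libvirt && cd fedora38-libvirt
    cp /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile .
    vagrant up --provider=libvirt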
00:01:19.528 ==> default: Creating domain with the following settings...
00:01:19.528 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720743623_cf2e661b98138bd8fa97
00:01:19.528 ==> default: -- Domain type: kvm
00:01:19.528 ==> default: -- Cpus: 10
00:01:19.528 ==> default: -- Feature: acpi
00:01:19.528 ==> default: -- Feature: apic
00:01:19.528 ==> default: -- Feature: pae
00:01:19.528 ==> default: -- Memory: 12288M
00:01:19.528 ==> default: -- Memory Backing: hugepages:
00:01:19.528 ==> default: -- Management MAC:
00:01:19.528 ==> default: -- Loader:
00:01:19.528 ==> default: -- Nvram:
00:01:19.528 ==> default: -- Base box: spdk/fedora38
00:01:19.528 ==> default: -- Storage pool: default
00:01:19.528 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720743623_cf2e661b98138bd8fa97.img (20G)
00:01:19.528 ==> default: -- Volume Cache: default
00:01:19.528 ==> default: -- Kernel:
00:01:19.528 ==> default: -- Initrd:
00:01:19.528 ==> default: -- Graphics Type: vnc
00:01:19.528 ==> default: -- Graphics Port: -1
00:01:19.528 ==> default: -- Graphics IP: 127.0.0.1
00:01:19.528 ==> default: -- Graphics Password: Not defined
00:01:19.528 ==> default: -- Video Type: cirrus
00:01:19.528 ==> default: -- Video VRAM: 9216
00:01:19.528 ==> default: -- Sound Type:
00:01:19.528 ==> default: -- Keymap: en-us
00:01:19.528 ==> default: -- TPM Path:
00:01:19.528 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:19.528 ==> default: -- Command line args:
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:19.528 ==> default: -> value=-drive,
00:01:19.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:19.528 ==> default: -> value=-drive,
00:01:19.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.528 ==> default: -> value=-drive,
00:01:19.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.528 ==> default: -> value=-drive,
00:01:19.528 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:19.528 ==> default: -> value=-device,
00:01:19.528 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:19.785 ==> default: Creating shared folders metadata...
00:01:19.785 ==> default: Starting domain.
00:01:22.348 ==> default: Waiting for domain to get an IP address...
00:01:37.215 ==> default: Waiting for SSH to become available...
00:01:39.117 ==> default: Configuring and enabling network interfaces...
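The "-> value=" pairs above are raw QEMU arguments passed through libvirt: each nvme controller gets a serial and PCI address, and each nvme-ns attaches one raw backing file as a namespace, which is how nvme-1 ends up with three namespaces. Condensed into a direct invocation (paths and IDs from the log; machine basics and the zoned=false flags elided), the second controller looks roughly like:

    # Hedged sketch of the second controller's QEMU arguments.
    qemu-system-x86_64 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096
    # nvme-1-drive1 (nsid=2) and nvme-1-drive2 (nsid=3) follow the same pattern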
00:01:43.304 default: SSH address: 192.168.121.131:22
00:01:43.304 default: SSH username: vagrant
00:01:43.304 default: SSH auth method: private key
00:01:45.205 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:53.314 ==> default: Mounting SSHFS shared folder...
00:01:54.690 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:54.690 ==> default: Checking Mount..
00:01:55.630 ==> default: Folder Successfully Mounted!
00:01:55.630 ==> default: Running provisioner: file...
00:01:56.564 default: ~/.gitconfig => .gitconfig
00:01:56.822
00:01:56.822 SUCCESS!
00:01:56.822
00:01:56.822 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:56.822 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:56.822 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:56.822
00:01:56.830 [Pipeline] }
00:01:56.844 [Pipeline] // stage
00:01:56.852 [Pipeline] dir
00:01:56.852 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt
00:01:56.853 [Pipeline] {
00:01:56.861 [Pipeline] catchError
00:01:56.862 [Pipeline] {
00:01:56.874 [Pipeline] sh
00:01:57.148 + vagrant ssh-config --host vagrant
00:01:57.148 + sed -ne /^Host/,$p
00:01:57.148 + tee ssh_conf
00:02:01.332 Host vagrant
00:02:01.332 HostName 192.168.121.131
00:02:01.332 User vagrant
00:02:01.333 Port 22
00:02:01.333 UserKnownHostsFile /dev/null
00:02:01.333 StrictHostKeyChecking no
00:02:01.333 PasswordAuthentication no
00:02:01.333 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:01.333 IdentitiesOnly yes
00:02:01.333 LogLevel FATAL
00:02:01.333 ForwardAgent yes
00:02:01.333 ForwardX11 yes
00:02:01.333
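Piping vagrant ssh-config through sed into ssh_conf, as above, is what lets every later step use stock ssh/scp instead of the vagrant wrapper. The same trick, runnable by hand from the directory holding the Vagrantfile:

    # Same pattern as the pipeline step above.
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    ssh -F ssh_conf vagrant hostname       # plain ssh now reaches the VM
    scp -F ssh_conf ./somefile vagrant:~/  # and so does scp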
00:02:01.706 container="$(< /etc/hostname) ($agent)" 00:02:01.706 else 00:02:01.706 # Fallback 00:02:01.706 container=$agent 00:02:01.706 fi 00:02:01.706 fi 00:02:01.706 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.706 00:02:01.717 [Pipeline] } 00:02:01.737 [Pipeline] // withEnv 00:02:01.746 [Pipeline] setCustomBuildProperty 00:02:01.763 [Pipeline] stage 00:02:01.765 [Pipeline] { (Tests) 00:02:01.783 [Pipeline] sh 00:02:02.055 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:02.328 [Pipeline] sh 00:02:02.606 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.879 [Pipeline] timeout 00:02:02.879 Timeout set to expire in 40 min 00:02:02.882 [Pipeline] { 00:02:02.900 [Pipeline] sh 00:02:03.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.747 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:03.784 [Pipeline] sh 00:02:04.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:04.337 [Pipeline] sh 00:02:04.701 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.974 [Pipeline] sh 00:02:05.246 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:05.246 ++ readlink -f spdk_repo 00:02:05.502 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:05.502 + [[ -n /home/vagrant/spdk_repo ]] 00:02:05.502 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:05.502 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:05.502 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:05.502 + [[ ! 
00:02:05.502 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:05.502 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:05.502 + cd /home/vagrant/spdk_repo
00:02:05.502 + source /etc/os-release
00:02:05.502 ++ NAME='Fedora Linux'
00:02:05.502 ++ VERSION='38 (Cloud Edition)'
00:02:05.502 ++ ID=fedora
00:02:05.502 ++ VERSION_ID=38
00:02:05.502 ++ VERSION_CODENAME=
00:02:05.502 ++ PLATFORM_ID=platform:f38
00:02:05.502 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:05.502 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:05.502 ++ LOGO=fedora-logo-icon
00:02:05.502 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:05.502 ++ HOME_URL=https://fedoraproject.org/
00:02:05.502 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:05.502 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:05.502 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:05.502 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:05.502 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:05.502 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:05.502 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:05.502 ++ SUPPORT_END=2024-05-14
00:02:05.502 ++ VARIANT='Cloud Edition'
00:02:05.502 ++ VARIANT_ID=cloud
00:02:05.502 + uname -a
00:02:05.502 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:05.502 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:05.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:05.759 Hugepages
00:02:05.759 node hugesize free / total
00:02:05.759 node0 1048576kB 0 / 0
00:02:05.759 node0 2048kB 0 / 0
00:02:05.759
00:02:05.759 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:05.759 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:06.017 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:06.017 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
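setup.sh status, above, summarizes the hugepage pools per NUMA node and the PCI devices SPDK could bind. The hugepage half of that table can be read straight out of sysfs; a hedged equivalent:

    #!/bin/bash
    # Hedged equivalent of the hugepage table above, read from sysfs.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            printf '%s %s free=%s total=%s\n' "$(basename "$node")" \
                "${hp##*hugepages-}" "$(cat "$hp/free_hugepages")" \
                "$(cat "$hp/nr_hugepages")"
        done
    done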
00:02:06.017 + rm -f /tmp/spdk-ld-path
00:02:06.017 + source autorun-spdk.conf
00:02:06.017 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.017 ++ SPDK_TEST_NVMF=1
00:02:06.017 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.017 ++ SPDK_TEST_VFIOUSER=1
00:02:06.017 ++ SPDK_TEST_USDT=1
00:02:06.017 ++ SPDK_RUN_ASAN=1
00:02:06.017 ++ SPDK_RUN_UBSAN=1
00:02:06.017 ++ SPDK_TEST_NVMF_MDNS=1
00:02:06.017 ++ NET_TYPE=virt
00:02:06.017 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:06.017 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:06.017 ++ RUN_NIGHTLY=1
00:02:06.017 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:06.017 + [[ -n '' ]]
00:02:06.017 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:06.017 + for M in /var/spdk/build-*-manifest.txt
00:02:06.017 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:06.017 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:06.017 + for M in /var/spdk/build-*-manifest.txt
00:02:06.017 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:06.017 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:06.017 ++ uname
00:02:06.017 + [[ Linux == \L\i\n\u\x ]]
00:02:06.017 + sudo dmesg -T
00:02:06.017 + sudo dmesg --clear
00:02:06.017 + dmesg_pid=5168
00:02:06.017 + sudo dmesg -Tw
00:02:06.017 + [[ Fedora Linux == FreeBSD ]]
00:02:06.017 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:06.017 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:06.017 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:06.017 + [[ -x /usr/src/fio-static/fio ]]
00:02:06.017 + export FIO_BIN=/usr/src/fio-static/fio
00:02:06.017 + FIO_BIN=/usr/src/fio-static/fio
00:02:06.017 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:06.017 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:06.017 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:06.017 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:06.017 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:06.017 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:06.017 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:06.017 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:06.017 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:06.017 Test configuration:
00:02:06.017 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.017 SPDK_TEST_NVMF=1
00:02:06.017 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:06.017 SPDK_TEST_VFIOUSER=1
00:02:06.017 SPDK_TEST_USDT=1
00:02:06.017 SPDK_RUN_ASAN=1
00:02:06.017 SPDK_RUN_UBSAN=1
00:02:06.017 SPDK_TEST_NVMF_MDNS=1
00:02:06.017 NET_TYPE=virt
00:02:06.017 SPDK_JSONRPC_GO_CLIENT=1
00:02:06.017 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:06.017 RUN_NIGHTLY=1
00:21:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:10 -- paths/export.sh@5 -- $ export PATH
00:21:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:10 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:10 -- common/autobuild_common.sh@444 -- $ date +%s
00:02:06.017 00:21:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720743670.XXXXXX
00:21:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720743670.G8kRYO
00:21:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:21:10 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:21:10 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:21:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:06.276 00:21:10 -- common/autobuild_common.sh@460 -- $ get_config_params
00:21:10 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:21:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:06.276 00:21:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:21:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:21:10 -- pm/common@17 -- $ local monitor
00:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:10 -- pm/common@21 -- $ date +%s
00:21:10 -- pm/common@25 -- $ sleep 1
00:21:10 -- pm/common@21 -- $ date +%s
00:21:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720743670
00:21:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720743670
00:02:06.276 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720743670_collect-vmstat.pm.log
00:02:06.276 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720743670_collect-cpu-load.pm.log
00:21:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:21:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:21:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:21:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:21:11 -- spdk/autobuild.sh@16 -- $ date -u
00:02:07.264 Fri Jul 12 12:21:11 AM UTC 2024
00:21:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:07.264 v24.09-pre-202-g719d03c6a
00:21:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:21:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:21:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:21:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:21:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.264 ************************************
00:02:07.264 START TEST asan
00:02:07.264 ************************************
00:02:07.264 using asan
00:21:12 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:02:07.264
00:02:07.264 real 0m0.000s
00:02:07.264 user 0m0.000s
00:02:07.264 sys 0m0.000s
00:21:12 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:21:12 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:07.264 ************************************
00:02:07.264 END TEST asan
00:02:07.264 ************************************
00:21:12 -- common/autotest_common.sh@1142 -- $ return 0
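run_test, from SPDK's test/common/autotest_common.sh, is the wrapper producing the START TEST/END TEST banners and the time summary above. A hedged, simplified reimplementation of that pattern (the real one also manages xtrace and richer bookkeeping):

    # Simplified sketch of a run_test-style wrapper.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the wrapped command, timed
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test asan echo 'using asan'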
00:21:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:21:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:21:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:21:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:21:12 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.264 ************************************
00:02:07.264 START TEST ubsan
00:02:07.264 ************************************
00:02:07.264 using ubsan
00:21:12 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:07.264
00:02:07.264 real 0m0.000s
00:02:07.264 user 0m0.000s
00:02:07.264 sys 0m0.000s
00:21:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:21:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:07.264 ************************************
00:02:07.264 END TEST ubsan
00:02:07.264 ************************************
00:21:12 -- common/autotest_common.sh@1142 -- $ return 0
00:21:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:21:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:21:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:21:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:21:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:21:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:21:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:21:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:21:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared
00:02:07.521 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:07.521 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:07.780 Using 'verbs' RDMA provider
00:02:24.042 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:34.080 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:34.080 go version go1.21.1 linux/amd64
00:02:34.645 Creating mk/config.mk...done.
00:02:34.645 Creating mk/cc.flags.mk...done.
00:02:34.645 Type 'make' to build.
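What follows is run_test make make -j10; the -j10 matches the SPDK_VAGRANT_VMCPU=10 the VM was given. A generic equivalent that sizes the job count to the machine instead of hard-coding it:

    # Generic equivalent of the build step below, assuming the repo root.
    cd /home/vagrant/spdk_repo/spdk
    make -j"$(nproc)"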
00:02:34.645 00:21:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:34.645 00:21:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:34.645 00:21:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:34.645 00:21:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.645 ************************************ 00:02:34.645 START TEST make 00:02:34.645 ************************************ 00:02:34.645 00:21:39 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:34.904 make[1]: Nothing to be done for 'all'. 00:02:36.283 The Meson build system 00:02:36.283 Version: 1.3.1 00:02:36.284 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:36.284 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:36.284 Build type: native build 00:02:36.284 Project name: libvfio-user 00:02:36.284 Project version: 0.0.1 00:02:36.284 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:36.284 C linker for the host machine: cc ld.bfd 2.39-16 00:02:36.284 Host machine cpu family: x86_64 00:02:36.284 Host machine cpu: x86_64 00:02:36.284 Run-time dependency threads found: YES 00:02:36.284 Library dl found: YES 00:02:36.284 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:36.284 Run-time dependency json-c found: YES 0.17 00:02:36.284 Run-time dependency cmocka found: YES 1.1.7 00:02:36.284 Program pytest-3 found: NO 00:02:36.284 Program flake8 found: NO 00:02:36.284 Program misspell-fixer found: NO 00:02:36.284 Program restructuredtext-lint found: NO 00:02:36.284 Program valgrind found: YES (/usr/bin/valgrind) 00:02:36.284 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.284 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.284 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.284 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:36.284 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:36.284 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:36.284 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
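Above, the libvfio-user submodule is configured out-of-tree with Meson 1.3.1 (buildtype debug, shared default_library per the user-defined options; the target summary continues below). A hedged sketch of the same configure-and-build flow, using the source and build paths from the log:

    # Hedged sketch of an out-of-tree Meson configure + ninja build.
    src=/home/vagrant/spdk_repo/spdk/libvfio-user
    build=/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
    meson setup "$build" "$src" --buildtype=debug -Ddefault_library=shared
    ninja -C "$build"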
00:02:36.284 Build targets in project: 8 00:02:36.284 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.284 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.284 00:02:36.284 libvfio-user 0.0.1 00:02:36.284 00:02:36.284 User defined options 00:02:36.284 buildtype : debug 00:02:36.284 default_library: shared 00:02:36.284 libdir : /usr/local/lib 00:02:36.284 00:02:36.284 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.848 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:37.105 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:37.105 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:37.105 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:37.105 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:37.105 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:37.105 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:37.105 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:37.105 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:37.105 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:37.105 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:37.105 [11/37] Compiling C object samples/client.p/client.c.o 00:02:37.362 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:37.362 [13/37] Compiling C object samples/null.p/null.c.o 00:02:37.362 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:37.362 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:37.362 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:37.362 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:37.362 [18/37] Linking target samples/client 00:02:37.362 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:37.362 [20/37] Compiling C object samples/server.p/server.c.o 00:02:37.362 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:37.362 [22/37] Linking target lib/libvfio-user.so.0.0.1 00:02:37.362 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:37.362 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:37.362 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:37.362 [26/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:37.362 [27/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:37.619 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:37.619 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:37.619 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:37.619 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.619 [32/37] Linking target samples/server 00:02:37.619 [33/37] Linking target test/unit_tests 00:02:37.619 [34/37] Linking target samples/lspci 00:02:37.619 [35/37] Linking target samples/null 00:02:37.619 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:37.619 [37/37] Linking target samples/gpio-pci-idio-16 00:02:37.877 INFO: autodetecting backend as ninja 00:02:37.877 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:37.877 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.443 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:38.443 ninja: no work to do. 00:02:50.745 The Meson build system 00:02:50.745 Version: 1.3.1 00:02:50.745 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:50.745 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:50.745 Build type: native build 00:02:50.745 Program cat found: YES (/usr/bin/cat) 00:02:50.745 Project name: DPDK 00:02:50.745 Project version: 24.03.0 00:02:50.745 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:50.745 C linker for the host machine: cc ld.bfd 2.39-16 00:02:50.745 Host machine cpu family: x86_64 00:02:50.745 Host machine cpu: x86_64 00:02:50.745 Message: ## Building in Developer Mode ## 00:02:50.745 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.745 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:50.745 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.745 Program python3 found: YES (/usr/bin/python3) 00:02:50.745 Program cat found: YES (/usr/bin/cat) 00:02:50.745 Compiler for C supports arguments -march=native: YES 00:02:50.745 Checking for size of "void *" : 8 00:02:50.745 Checking for size of "void *" : 8 (cached) 00:02:50.745 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:50.745 Library m found: YES 00:02:50.745 Library numa found: YES 00:02:50.745 Has header "numaif.h" : YES 00:02:50.745 Library fdt found: NO 00:02:50.745 Library execinfo found: NO 00:02:50.745 Has header "execinfo.h" : YES 00:02:50.745 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:50.745 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.745 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.745 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.745 Run-time dependency openssl found: YES 3.0.9 00:02:50.745 Run-time dependency libpcap found: YES 1.10.4 00:02:50.745 Has header "pcap.h" with dependency libpcap: YES 00:02:50.745 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.745 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.745 Compiler for C supports arguments -Wformat: YES 00:02:50.745 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.745 Compiler for C supports arguments -Wformat-security: NO 00:02:50.745 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.745 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.745 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.745 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.745 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.745 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.745 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.745 Compiler for C supports arguments -Wundef: YES 00:02:50.745 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.745 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:50.745 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.745 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.745 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:02:50.745 Program objdump found: YES (/usr/bin/objdump) 00:02:50.745 Compiler for C supports arguments -mavx512f: YES 00:02:50.745 Checking if "AVX512 checking" compiles: YES 00:02:50.745 Fetching value of define "__SSE4_2__" : 1 00:02:50.745 Fetching value of define "__AES__" : 1 00:02:50.745 Fetching value of define "__AVX__" : 1 00:02:50.745 Fetching value of define "__AVX2__" : 1 00:02:50.745 Fetching value of define "__AVX512BW__" : (undefined) 00:02:50.745 Fetching value of define "__AVX512CD__" : (undefined) 00:02:50.745 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:50.745 Fetching value of define "__AVX512F__" : (undefined) 00:02:50.745 Fetching value of define "__AVX512VL__" : (undefined) 00:02:50.745 Fetching value of define "__PCLMUL__" : 1 00:02:50.745 Fetching value of define "__RDRND__" : 1 00:02:50.745 Fetching value of define "__RDSEED__" : 1 00:02:50.745 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.745 Fetching value of define "__znver1__" : (undefined) 00:02:50.745 Fetching value of define "__znver2__" : (undefined) 00:02:50.745 Fetching value of define "__znver3__" : (undefined) 00:02:50.745 Fetching value of define "__znver4__" : (undefined) 00:02:50.745 Library asan found: YES 00:02:50.745 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.745 Message: lib/log: Defining dependency "log" 00:02:50.745 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.745 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.745 Library rt found: YES 00:02:50.745 Checking for function "getentropy" : NO 00:02:50.745 Message: lib/eal: Defining dependency "eal" 00:02:50.745 Message: lib/ring: Defining dependency "ring" 00:02:50.745 Message: lib/rcu: Defining dependency "rcu" 00:02:50.745 Message: lib/mempool: Defining dependency "mempool" 00:02:50.745 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.745 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.745 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:50.745 Compiler for C supports arguments -mpclmul: YES 00:02:50.745 Compiler for C supports arguments -maes: YES 00:02:50.745 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.745 Compiler for C supports arguments -mavx512bw: YES 00:02:50.745 Compiler for C supports arguments -mavx512dq: YES 00:02:50.745 Compiler for C supports arguments -mavx512vl: YES 00:02:50.745 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.745 Compiler for C supports arguments -mavx2: YES 00:02:50.745 Compiler for C supports arguments -mavx: YES 00:02:50.745 Message: lib/net: Defining dependency "net" 00:02:50.745 Message: lib/meter: Defining dependency "meter" 00:02:50.745 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.745 Message: lib/pci: Defining dependency "pci" 00:02:50.746 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.746 Message: lib/hash: Defining dependency "hash" 00:02:50.746 Message: lib/timer: Defining dependency "timer" 00:02:50.746 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.746 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.746 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.746 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.746 Message: lib/power: Defining dependency "power" 00:02:50.746 Message: lib/reorder: Defining dependency "reorder" 00:02:50.746 Message: lib/security: Defining dependency "security" 00:02:50.746 Has header "linux/userfaultfd.h" : YES 
00:02:50.746 Has header "linux/vduse.h" : YES 00:02:50.746 Message: lib/vhost: Defining dependency "vhost" 00:02:50.746 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.746 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.746 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.746 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.746 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.746 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.746 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.746 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.746 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.746 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.746 Program doxygen found: YES (/usr/bin/doxygen) 00:02:50.746 Configuring doxy-api-html.conf using configuration 00:02:50.746 Configuring doxy-api-man.conf using configuration 00:02:50.746 Program mandb found: YES (/usr/bin/mandb) 00:02:50.746 Program sphinx-build found: NO 00:02:50.746 Configuring rte_build_config.h using configuration 00:02:50.746 Message: 00:02:50.746 ================= 00:02:50.746 Applications Enabled 00:02:50.746 ================= 00:02:50.746 00:02:50.746 apps: 00:02:50.746 00:02:50.746 00:02:50.746 Message: 00:02:50.746 ================= 00:02:50.746 Libraries Enabled 00:02:50.746 ================= 00:02:50.746 00:02:50.746 libs: 00:02:50.746 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.746 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.746 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.746 00:02:50.746 Message: 00:02:50.746 =============== 00:02:50.746 Drivers Enabled 00:02:50.746 =============== 00:02:50.746 00:02:50.746 common: 00:02:50.746 00:02:50.746 bus: 00:02:50.746 pci, vdev, 00:02:50.746 mempool: 00:02:50.746 ring, 00:02:50.746 dma: 00:02:50.746 00:02:50.746 net: 00:02:50.746 00:02:50.746 crypto: 00:02:50.746 00:02:50.746 compress: 00:02:50.746 00:02:50.746 vdpa: 00:02:50.746 00:02:50.746 00:02:50.746 Message: 00:02:50.746 ================= 00:02:50.746 Content Skipped 00:02:50.746 ================= 00:02:50.746 00:02:50.746 apps: 00:02:50.746 dumpcap: explicitly disabled via build config 00:02:50.746 graph: explicitly disabled via build config 00:02:50.746 pdump: explicitly disabled via build config 00:02:50.746 proc-info: explicitly disabled via build config 00:02:50.746 test-acl: explicitly disabled via build config 00:02:50.746 test-bbdev: explicitly disabled via build config 00:02:50.746 test-cmdline: explicitly disabled via build config 00:02:50.746 test-compress-perf: explicitly disabled via build config 00:02:50.746 test-crypto-perf: explicitly disabled via build config 00:02:50.746 test-dma-perf: explicitly disabled via build config 00:02:50.746 test-eventdev: explicitly disabled via build config 00:02:50.746 test-fib: explicitly disabled via build config 00:02:50.746 test-flow-perf: explicitly disabled via build config 00:02:50.746 test-gpudev: explicitly disabled via build config 00:02:50.746 test-mldev: explicitly disabled via build config 00:02:50.746 test-pipeline: explicitly disabled via build config 00:02:50.746 test-pmd: explicitly disabled via build config 00:02:50.746 test-regex: explicitly disabled via build config 00:02:50.746 test-sad: explicitly disabled via build 
config 00:02:50.746 test-security-perf: explicitly disabled via build config 00:02:50.746 00:02:50.746 libs: 00:02:50.746 argparse: explicitly disabled via build config 00:02:50.746 metrics: explicitly disabled via build config 00:02:50.746 acl: explicitly disabled via build config 00:02:50.746 bbdev: explicitly disabled via build config 00:02:50.746 bitratestats: explicitly disabled via build config 00:02:50.746 bpf: explicitly disabled via build config 00:02:50.746 cfgfile: explicitly disabled via build config 00:02:50.746 distributor: explicitly disabled via build config 00:02:50.746 efd: explicitly disabled via build config 00:02:50.746 eventdev: explicitly disabled via build config 00:02:50.746 dispatcher: explicitly disabled via build config 00:02:50.746 gpudev: explicitly disabled via build config 00:02:50.746 gro: explicitly disabled via build config 00:02:50.746 gso: explicitly disabled via build config 00:02:50.746 ip_frag: explicitly disabled via build config 00:02:50.746 jobstats: explicitly disabled via build config 00:02:50.746 latencystats: explicitly disabled via build config 00:02:50.746 lpm: explicitly disabled via build config 00:02:50.746 member: explicitly disabled via build config 00:02:50.746 pcapng: explicitly disabled via build config 00:02:50.746 rawdev: explicitly disabled via build config 00:02:50.746 regexdev: explicitly disabled via build config 00:02:50.746 mldev: explicitly disabled via build config 00:02:50.746 rib: explicitly disabled via build config 00:02:50.746 sched: explicitly disabled via build config 00:02:50.746 stack: explicitly disabled via build config 00:02:50.746 ipsec: explicitly disabled via build config 00:02:50.746 pdcp: explicitly disabled via build config 00:02:50.746 fib: explicitly disabled via build config 00:02:50.746 port: explicitly disabled via build config 00:02:50.746 pdump: explicitly disabled via build config 00:02:50.746 table: explicitly disabled via build config 00:02:50.746 pipeline: explicitly disabled via build config 00:02:50.746 graph: explicitly disabled via build config 00:02:50.746 node: explicitly disabled via build config 00:02:50.746 00:02:50.746 drivers: 00:02:50.746 common/cpt: not in enabled drivers build config 00:02:50.746 common/dpaax: not in enabled drivers build config 00:02:50.746 common/iavf: not in enabled drivers build config 00:02:50.746 common/idpf: not in enabled drivers build config 00:02:50.746 common/ionic: not in enabled drivers build config 00:02:50.746 common/mvep: not in enabled drivers build config 00:02:50.746 common/octeontx: not in enabled drivers build config 00:02:50.746 bus/auxiliary: not in enabled drivers build config 00:02:50.746 bus/cdx: not in enabled drivers build config 00:02:50.746 bus/dpaa: not in enabled drivers build config 00:02:50.746 bus/fslmc: not in enabled drivers build config 00:02:50.746 bus/ifpga: not in enabled drivers build config 00:02:50.746 bus/platform: not in enabled drivers build config 00:02:50.746 bus/uacce: not in enabled drivers build config 00:02:50.746 bus/vmbus: not in enabled drivers build config 00:02:50.746 common/cnxk: not in enabled drivers build config 00:02:50.746 common/mlx5: not in enabled drivers build config 00:02:50.746 common/nfp: not in enabled drivers build config 00:02:50.746 common/nitrox: not in enabled drivers build config 00:02:50.746 common/qat: not in enabled drivers build config 00:02:50.746 common/sfc_efx: not in enabled drivers build config 00:02:50.746 mempool/bucket: not in enabled drivers build config 00:02:50.746 
mempool/cnxk: not in enabled drivers build config 00:02:50.746 mempool/dpaa: not in enabled drivers build config 00:02:50.746 mempool/dpaa2: not in enabled drivers build config 00:02:50.746 mempool/octeontx: not in enabled drivers build config 00:02:50.746 mempool/stack: not in enabled drivers build config 00:02:50.746 dma/cnxk: not in enabled drivers build config 00:02:50.746 dma/dpaa: not in enabled drivers build config 00:02:50.746 dma/dpaa2: not in enabled drivers build config 00:02:50.746 dma/hisilicon: not in enabled drivers build config 00:02:50.746 dma/idxd: not in enabled drivers build config 00:02:50.746 dma/ioat: not in enabled drivers build config 00:02:50.746 dma/skeleton: not in enabled drivers build config 00:02:50.746 net/af_packet: not in enabled drivers build config 00:02:50.746 net/af_xdp: not in enabled drivers build config 00:02:50.746 net/ark: not in enabled drivers build config 00:02:50.746 net/atlantic: not in enabled drivers build config 00:02:50.746 net/avp: not in enabled drivers build config 00:02:50.746 net/axgbe: not in enabled drivers build config 00:02:50.746 net/bnx2x: not in enabled drivers build config 00:02:50.746 net/bnxt: not in enabled drivers build config 00:02:50.746 net/bonding: not in enabled drivers build config 00:02:50.746 net/cnxk: not in enabled drivers build config 00:02:50.746 net/cpfl: not in enabled drivers build config 00:02:50.746 net/cxgbe: not in enabled drivers build config 00:02:50.747 net/dpaa: not in enabled drivers build config 00:02:50.747 net/dpaa2: not in enabled drivers build config 00:02:50.747 net/e1000: not in enabled drivers build config 00:02:50.747 net/ena: not in enabled drivers build config 00:02:50.747 net/enetc: not in enabled drivers build config 00:02:50.747 net/enetfec: not in enabled drivers build config 00:02:50.747 net/enic: not in enabled drivers build config 00:02:50.747 net/failsafe: not in enabled drivers build config 00:02:50.747 net/fm10k: not in enabled drivers build config 00:02:50.747 net/gve: not in enabled drivers build config 00:02:50.747 net/hinic: not in enabled drivers build config 00:02:50.747 net/hns3: not in enabled drivers build config 00:02:50.747 net/i40e: not in enabled drivers build config 00:02:50.747 net/iavf: not in enabled drivers build config 00:02:50.747 net/ice: not in enabled drivers build config 00:02:50.747 net/idpf: not in enabled drivers build config 00:02:50.747 net/igc: not in enabled drivers build config 00:02:50.747 net/ionic: not in enabled drivers build config 00:02:50.747 net/ipn3ke: not in enabled drivers build config 00:02:50.747 net/ixgbe: not in enabled drivers build config 00:02:50.747 net/mana: not in enabled drivers build config 00:02:50.747 net/memif: not in enabled drivers build config 00:02:50.747 net/mlx4: not in enabled drivers build config 00:02:50.747 net/mlx5: not in enabled drivers build config 00:02:50.747 net/mvneta: not in enabled drivers build config 00:02:50.747 net/mvpp2: not in enabled drivers build config 00:02:50.747 net/netvsc: not in enabled drivers build config 00:02:50.747 net/nfb: not in enabled drivers build config 00:02:50.747 net/nfp: not in enabled drivers build config 00:02:50.747 net/ngbe: not in enabled drivers build config 00:02:50.747 net/null: not in enabled drivers build config 00:02:50.747 net/octeontx: not in enabled drivers build config 00:02:50.747 net/octeon_ep: not in enabled drivers build config 00:02:50.747 net/pcap: not in enabled drivers build config 00:02:50.747 net/pfe: not in enabled drivers build config 
00:02:50.747 net/qede: not in enabled drivers build config 00:02:50.747 net/ring: not in enabled drivers build config 00:02:50.747 net/sfc: not in enabled drivers build config 00:02:50.747 net/softnic: not in enabled drivers build config 00:02:50.747 net/tap: not in enabled drivers build config 00:02:50.747 net/thunderx: not in enabled drivers build config 00:02:50.747 net/txgbe: not in enabled drivers build config 00:02:50.747 net/vdev_netvsc: not in enabled drivers build config 00:02:50.747 net/vhost: not in enabled drivers build config 00:02:50.747 net/virtio: not in enabled drivers build config 00:02:50.747 net/vmxnet3: not in enabled drivers build config 00:02:50.747 raw/*: missing internal dependency, "rawdev" 00:02:50.747 crypto/armv8: not in enabled drivers build config 00:02:50.747 crypto/bcmfs: not in enabled drivers build config 00:02:50.747 crypto/caam_jr: not in enabled drivers build config 00:02:50.747 crypto/ccp: not in enabled drivers build config 00:02:50.747 crypto/cnxk: not in enabled drivers build config 00:02:50.747 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.747 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.747 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.747 crypto/mlx5: not in enabled drivers build config 00:02:50.747 crypto/mvsam: not in enabled drivers build config 00:02:50.747 crypto/nitrox: not in enabled drivers build config 00:02:50.747 crypto/null: not in enabled drivers build config 00:02:50.747 crypto/octeontx: not in enabled drivers build config 00:02:50.747 crypto/openssl: not in enabled drivers build config 00:02:50.747 crypto/scheduler: not in enabled drivers build config 00:02:50.747 crypto/uadk: not in enabled drivers build config 00:02:50.747 crypto/virtio: not in enabled drivers build config 00:02:50.747 compress/isal: not in enabled drivers build config 00:02:50.747 compress/mlx5: not in enabled drivers build config 00:02:50.747 compress/nitrox: not in enabled drivers build config 00:02:50.747 compress/octeontx: not in enabled drivers build config 00:02:50.747 compress/zlib: not in enabled drivers build config 00:02:50.747 regex/*: missing internal dependency, "regexdev" 00:02:50.747 ml/*: missing internal dependency, "mldev" 00:02:50.747 vdpa/ifc: not in enabled drivers build config 00:02:50.747 vdpa/mlx5: not in enabled drivers build config 00:02:50.747 vdpa/nfp: not in enabled drivers build config 00:02:50.747 vdpa/sfc: not in enabled drivers build config 00:02:50.747 event/*: missing internal dependency, "eventdev" 00:02:50.747 baseband/*: missing internal dependency, "bbdev" 00:02:50.747 gpu/*: missing internal dependency, "gpudev" 00:02:50.747 00:02:50.747 00:02:50.747 Build targets in project: 85 00:02:50.747 00:02:50.747 DPDK 24.03.0 00:02:50.747 00:02:50.747 User defined options 00:02:50.747 buildtype : debug 00:02:50.747 default_library : shared 00:02:50.747 libdir : lib 00:02:50.747 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:50.747 b_sanitize : address 00:02:50.747 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:50.747 c_link_args : 00:02:50.747 cpu_instruction_set: native 00:02:50.747 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:50.747 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:50.747 enable_docs : false 00:02:50.747 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:50.747 enable_kmods : false 00:02:50.747 max_lcores : 128 00:02:50.747 tests : false 00:02:50.747 00:02:50.747 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.747 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.747 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.747 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.747 [3/268] Linking static target lib/librte_kvargs.a 00:02:50.747 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.747 [5/268] Linking static target lib/librte_log.a 00:02:50.747 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.747 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:51.006 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:51.006 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.006 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:51.006 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:51.266 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.266 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:51.266 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:51.526 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:51.785 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.785 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:51.785 [18/268] Linking target lib/librte_log.so.24.1 00:02:51.785 [19/268] Linking static target lib/librte_telemetry.a 00:02:52.044 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.044 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.044 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.044 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:52.302 [24/268] Linking target lib/librte_kvargs.so.24.1 00:02:52.302 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.302 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.302 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.302 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.560 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.560 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:52.560 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:52.817 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.817 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 
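The "User defined options" block above maps directly onto a meson setup invocation. A hedged reconstruction from the printed options only (the wrapper script this job actually runs may differ; the elided disable_apps/disable_libs values are the full comma-separated lists shown above):

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
        -Denable_docs=false -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=... -Ddisable_libs=...  # full lists exactly as printed above

Meson then drives the build with ninja from /home/vagrant/spdk_repo/spdk/dpdk/build-tmp, as the "ninja: Entering directory" line above shows.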
00:02:53.075 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:53.075 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:53.333 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.333 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.333 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.333 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.333 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.333 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.591 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:53.591 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.591 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.591 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.849 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.849 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.107 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.364 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.364 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.364 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.623 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.623 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.623 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.881 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.881 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.881 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.139 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.139 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.139 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.139 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.397 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.397 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.657 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.657 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.657 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.915 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.915 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.173 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.173 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.173 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.430 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.430 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.430 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.686 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.686 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.686 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.944 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.944 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.944 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.200 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.200 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.200 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.458 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.458 [85/268] Linking static target lib/librte_ring.a 00:02:57.715 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.715 [87/268] Linking static target lib/librte_eal.a 00:02:57.973 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.973 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.973 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.231 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.231 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.231 [93/268] Linking static target lib/librte_mempool.a 00:02:58.231 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.231 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.231 [96/268] Linking static target lib/librte_rcu.a 00:02:58.231 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.798 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.055 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.056 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.056 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.314 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.314 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.314 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.314 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.314 [106/268] Linking static target lib/librte_mbuf.a 00:02:59.572 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.572 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.830 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.830 [110/268] Linking static target lib/librte_meter.a 00:02:59.830 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.830 [112/268] Linking static target lib/librte_net.a 00:03:00.397 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.397 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.397 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.397 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.397 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.655 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.655 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.966 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.225 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.483 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:01.742 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:01.742 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:01.742 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.000 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.000 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.000 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.000 [129/268] Linking static target lib/librte_pci.a 00:03:02.000 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.000 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.000 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.000 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.258 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.258 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.258 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.258 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:02.516 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.516 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.516 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.516 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.516 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.516 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.516 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:02.516 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.774 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.774 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.774 [148/268] Linking static target lib/librte_cmdline.a 00:03:03.032 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:03.032 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.290 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.290 [152/268] Linking static target lib/librte_ethdev.a 00:03:03.290 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.547 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.547 [155/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.547 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.547 [157/268] Linking static target lib/librte_timer.a 00:03:03.803 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.060 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.060 [160/268] Linking static target lib/librte_compressdev.a 00:03:04.318 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.318 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.318 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.575 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.575 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.575 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.575 [167/268] Linking static target lib/librte_dmadev.a 00:03:04.575 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:04.575 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.575 [170/268] Linking static target lib/librte_hash.a 00:03:04.832 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:05.090 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.090 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:05.090 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.090 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:05.658 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.658 [177/268] Linking static target lib/librte_cryptodev.a 00:03:05.658 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.658 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.658 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:05.914 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:05.914 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:05.914 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:05.914 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.914 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.171 [186/268] Linking static target lib/librte_power.a 00:03:06.736 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.736 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.736 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.736 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.736 [191/268] Linking static target lib/librte_security.a 00:03:06.993 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:06.993 [193/268] Linking static target lib/librte_reorder.a 00:03:07.250 [194/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.507 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.507 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.764 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.764 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:08.021 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:08.021 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:08.021 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.278 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.278 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:08.278 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.536 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.793 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:08.793 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:08.793 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.793 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:08.793 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.050 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.050 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.050 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.050 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.050 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.050 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:09.308 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.308 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.308 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:09.308 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.308 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.566 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.566 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:09.566 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.566 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.566 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:09.824 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.756 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.756 [229/268] Linking target lib/librte_eal.so.24.1 00:03:11.075 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:11.075 [231/268] Linking target lib/librte_ring.so.24.1 00:03:11.075 [232/268] Linking target lib/librte_pci.so.24.1 00:03:11.075 
[233/268] Linking target lib/librte_timer.so.24.1 00:03:11.075 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:11.075 [235/268] Linking target lib/librte_meter.so.24.1 00:03:11.075 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:11.075 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.075 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:11.075 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:11.075 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:11.075 [241/268] Linking target lib/librte_rcu.so.24.1 00:03:11.075 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:11.075 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:11.075 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:11.075 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:11.333 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:11.333 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:11.333 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:11.333 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:11.592 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:11.592 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:11.592 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:11.592 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:11.592 [254/268] Linking target lib/librte_net.so.24.1 00:03:11.852 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:11.852 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:11.852 [257/268] Linking target lib/librte_security.so.24.1 00:03:11.852 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:11.852 [259/268] Linking target lib/librte_hash.so.24.1 00:03:11.852 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.111 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:12.111 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:12.370 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:12.370 [264/268] Linking target lib/librte_power.so.24.1 00:03:15.653 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:15.653 [266/268] Linking static target lib/librte_vhost.a 00:03:17.027 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.028 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:17.028 INFO: autodetecting backend as ninja 00:03:17.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:18.400 CC lib/ut/ut.o 00:03:18.400 CC lib/ut_mock/mock.o 00:03:18.400 CC lib/log/log_flags.o 00:03:18.400 CC lib/log/log.o 00:03:18.400 CC lib/log/log_deprecated.o 00:03:18.400 LIB libspdk_ut.a 00:03:18.400 LIB libspdk_ut_mock.a 00:03:18.400 LIB libspdk_log.a 00:03:18.400 SO libspdk_ut.so.2.0 00:03:18.400 SO libspdk_ut_mock.so.6.0 00:03:18.400 SO libspdk_log.so.7.0 00:03:18.400 SYMLINK libspdk_ut.so 00:03:18.400 SYMLINK libspdk_ut_mock.so 00:03:18.400 SYMLINK 
libspdk_log.so 00:03:18.658 CC lib/util/base64.o 00:03:18.658 CC lib/util/bit_array.o 00:03:18.658 CC lib/util/crc16.o 00:03:18.658 CC lib/dma/dma.o 00:03:18.658 CC lib/util/cpuset.o 00:03:18.658 CC lib/util/crc32.o 00:03:18.658 CC lib/util/crc32c.o 00:03:18.658 CXX lib/trace_parser/trace.o 00:03:18.658 CC lib/ioat/ioat.o 00:03:18.917 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.917 CC lib/util/crc32_ieee.o 00:03:18.917 CC lib/util/crc64.o 00:03:18.917 CC lib/vfio_user/host/vfio_user.o 00:03:18.917 LIB libspdk_dma.a 00:03:18.917 CC lib/util/dif.o 00:03:18.917 CC lib/util/fd.o 00:03:18.917 CC lib/util/file.o 00:03:18.917 SO libspdk_dma.so.4.0 00:03:18.917 CC lib/util/hexlify.o 00:03:18.917 SYMLINK libspdk_dma.so 00:03:18.917 CC lib/util/iov.o 00:03:18.917 CC lib/util/math.o 00:03:19.175 CC lib/util/pipe.o 00:03:19.175 CC lib/util/strerror_tls.o 00:03:19.175 LIB libspdk_ioat.a 00:03:19.175 CC lib/util/string.o 00:03:19.175 LIB libspdk_vfio_user.a 00:03:19.175 SO libspdk_ioat.so.7.0 00:03:19.175 SO libspdk_vfio_user.so.5.0 00:03:19.175 CC lib/util/uuid.o 00:03:19.175 CC lib/util/fd_group.o 00:03:19.175 SYMLINK libspdk_ioat.so 00:03:19.175 CC lib/util/xor.o 00:03:19.175 CC lib/util/zipf.o 00:03:19.175 SYMLINK libspdk_vfio_user.so 00:03:19.741 LIB libspdk_util.a 00:03:19.741 SO libspdk_util.so.9.1 00:03:19.741 LIB libspdk_trace_parser.a 00:03:19.999 SO libspdk_trace_parser.so.5.0 00:03:20.000 SYMLINK libspdk_util.so 00:03:20.000 SYMLINK libspdk_trace_parser.so 00:03:20.000 CC lib/idxd/idxd.o 00:03:20.000 CC lib/idxd/idxd_user.o 00:03:20.000 CC lib/idxd/idxd_kernel.o 00:03:20.000 CC lib/conf/conf.o 00:03:20.000 CC lib/rdma_utils/rdma_utils.o 00:03:20.000 CC lib/rdma_provider/common.o 00:03:20.000 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:20.000 CC lib/json/json_parse.o 00:03:20.000 CC lib/vmd/vmd.o 00:03:20.000 CC lib/env_dpdk/env.o 00:03:20.258 CC lib/env_dpdk/memory.o 00:03:20.258 CC lib/env_dpdk/pci.o 00:03:20.258 LIB libspdk_rdma_provider.a 00:03:20.258 SO libspdk_rdma_provider.so.6.0 00:03:20.258 LIB libspdk_conf.a 00:03:20.258 CC lib/vmd/led.o 00:03:20.517 CC lib/json/json_util.o 00:03:20.517 SO libspdk_conf.so.6.0 00:03:20.517 SYMLINK libspdk_rdma_provider.so 00:03:20.517 LIB libspdk_rdma_utils.a 00:03:20.517 CC lib/env_dpdk/init.o 00:03:20.517 SYMLINK libspdk_conf.so 00:03:20.517 CC lib/json/json_write.o 00:03:20.517 SO libspdk_rdma_utils.so.1.0 00:03:20.517 CC lib/env_dpdk/threads.o 00:03:20.517 SYMLINK libspdk_rdma_utils.so 00:03:20.517 CC lib/env_dpdk/pci_ioat.o 00:03:20.775 CC lib/env_dpdk/pci_virtio.o 00:03:20.775 CC lib/env_dpdk/pci_vmd.o 00:03:20.775 CC lib/env_dpdk/pci_idxd.o 00:03:20.775 CC lib/env_dpdk/pci_event.o 00:03:20.775 LIB libspdk_json.a 00:03:20.775 CC lib/env_dpdk/sigbus_handler.o 00:03:20.775 LIB libspdk_idxd.a 00:03:20.775 SO libspdk_json.so.6.0 00:03:20.775 CC lib/env_dpdk/pci_dpdk.o 00:03:20.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:20.775 SO libspdk_idxd.so.12.0 00:03:21.036 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.036 SYMLINK libspdk_json.so 00:03:21.036 SYMLINK libspdk_idxd.so 00:03:21.036 LIB libspdk_vmd.a 00:03:21.036 SO libspdk_vmd.so.6.0 00:03:21.036 SYMLINK libspdk_vmd.so 00:03:21.297 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.297 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.297 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.556 LIB libspdk_jsonrpc.a 00:03:21.556 SO libspdk_jsonrpc.so.6.0 00:03:21.556 SYMLINK libspdk_jsonrpc.so 00:03:21.814 CC lib/rpc/rpc.o 00:03:22.072 LIB libspdk_env_dpdk.a 00:03:22.072 
SO libspdk_env_dpdk.so.14.1 00:03:22.072 LIB libspdk_rpc.a 00:03:22.332 SO libspdk_rpc.so.6.0 00:03:22.332 SYMLINK libspdk_rpc.so 00:03:22.332 SYMLINK libspdk_env_dpdk.so 00:03:22.590 CC lib/keyring/keyring.o 00:03:22.590 CC lib/trace/trace.o 00:03:22.590 CC lib/keyring/keyring_rpc.o 00:03:22.590 CC lib/trace/trace_flags.o 00:03:22.590 CC lib/trace/trace_rpc.o 00:03:22.590 CC lib/notify/notify.o 00:03:22.590 CC lib/notify/notify_rpc.o 00:03:22.848 LIB libspdk_notify.a 00:03:22.848 SO libspdk_notify.so.6.0 00:03:22.848 LIB libspdk_keyring.a 00:03:22.848 SO libspdk_keyring.so.1.0 00:03:22.848 SYMLINK libspdk_notify.so 00:03:22.848 LIB libspdk_trace.a 00:03:22.848 SO libspdk_trace.so.10.0 00:03:22.848 SYMLINK libspdk_keyring.so 00:03:23.106 SYMLINK libspdk_trace.so 00:03:23.364 CC lib/thread/thread.o 00:03:23.364 CC lib/thread/iobuf.o 00:03:23.364 CC lib/sock/sock_rpc.o 00:03:23.364 CC lib/sock/sock.o 00:03:23.931 LIB libspdk_sock.a 00:03:23.931 SO libspdk_sock.so.10.0 00:03:23.931 SYMLINK libspdk_sock.so 00:03:24.499 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.499 CC lib/nvme/nvme_ctrlr.o 00:03:24.499 CC lib/nvme/nvme_fabric.o 00:03:24.499 CC lib/nvme/nvme_ns_cmd.o 00:03:24.499 CC lib/nvme/nvme_ns.o 00:03:24.499 CC lib/nvme/nvme_pcie.o 00:03:24.499 CC lib/nvme/nvme_pcie_common.o 00:03:24.499 CC lib/nvme/nvme_qpair.o 00:03:24.499 CC lib/nvme/nvme.o 00:03:25.065 CC lib/nvme/nvme_quirks.o 00:03:25.323 CC lib/nvme/nvme_transport.o 00:03:25.323 CC lib/nvme/nvme_discovery.o 00:03:25.323 LIB libspdk_thread.a 00:03:25.323 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:25.323 SO libspdk_thread.so.10.1 00:03:25.581 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:25.581 CC lib/nvme/nvme_tcp.o 00:03:25.581 CC lib/nvme/nvme_opal.o 00:03:25.581 SYMLINK libspdk_thread.so 00:03:25.581 CC lib/nvme/nvme_io_msg.o 00:03:25.581 CC lib/nvme/nvme_poll_group.o 00:03:25.839 CC lib/nvme/nvme_zns.o 00:03:26.098 CC lib/accel/accel.o 00:03:26.098 CC lib/nvme/nvme_stubs.o 00:03:26.098 CC lib/accel/accel_rpc.o 00:03:26.098 CC lib/accel/accel_sw.o 00:03:26.098 CC lib/nvme/nvme_auth.o 00:03:26.356 CC lib/nvme/nvme_cuse.o 00:03:26.356 CC lib/nvme/nvme_vfio_user.o 00:03:26.614 CC lib/nvme/nvme_rdma.o 00:03:26.873 CC lib/blob/blobstore.o 00:03:26.873 CC lib/init/json_config.o 00:03:27.151 CC lib/virtio/virtio.o 00:03:27.151 CC lib/vfu_tgt/tgt_endpoint.o 00:03:27.409 CC lib/init/subsystem.o 00:03:27.409 CC lib/init/subsystem_rpc.o 00:03:27.409 CC lib/init/rpc.o 00:03:27.409 CC lib/virtio/virtio_vhost_user.o 00:03:27.409 CC lib/blob/request.o 00:03:27.409 CC lib/blob/zeroes.o 00:03:27.409 CC lib/vfu_tgt/tgt_rpc.o 00:03:27.668 LIB libspdk_accel.a 00:03:27.668 LIB libspdk_init.a 00:03:27.668 SO libspdk_accel.so.15.1 00:03:27.668 SO libspdk_init.so.5.0 00:03:27.668 CC lib/blob/blob_bs_dev.o 00:03:27.668 CC lib/virtio/virtio_vfio_user.o 00:03:27.668 LIB libspdk_vfu_tgt.a 00:03:27.668 SYMLINK libspdk_accel.so 00:03:27.668 CC lib/virtio/virtio_pci.o 00:03:27.668 SO libspdk_vfu_tgt.so.3.0 00:03:27.668 SYMLINK libspdk_init.so 00:03:27.927 SYMLINK libspdk_vfu_tgt.so 00:03:27.927 CC lib/bdev/bdev.o 00:03:27.927 CC lib/bdev/bdev_rpc.o 00:03:27.927 CC lib/bdev/bdev_zone.o 00:03:27.927 CC lib/bdev/part.o 00:03:27.927 CC lib/bdev/scsi_nvme.o 00:03:28.184 LIB libspdk_virtio.a 00:03:28.184 CC lib/event/app.o 00:03:28.184 CC lib/event/reactor.o 00:03:28.184 SO libspdk_virtio.so.7.0 00:03:28.184 CC lib/event/log_rpc.o 00:03:28.184 SYMLINK libspdk_virtio.so 00:03:28.184 CC lib/event/app_rpc.o 00:03:28.184 CC lib/event/scheduler_static.o 00:03:28.184 LIB libspdk_nvme.a 
00:03:28.443 SO libspdk_nvme.so.13.1 00:03:28.701 LIB libspdk_event.a 00:03:28.701 SO libspdk_event.so.14.0 00:03:28.701 SYMLINK libspdk_event.so 00:03:28.958 SYMLINK libspdk_nvme.so 00:03:31.488 LIB libspdk_blob.a 00:03:31.488 SO libspdk_blob.so.11.0 00:03:31.488 LIB libspdk_bdev.a 00:03:31.488 SYMLINK libspdk_blob.so 00:03:31.488 SO libspdk_bdev.so.15.1 00:03:31.746 SYMLINK libspdk_bdev.so 00:03:31.746 CC lib/lvol/lvol.o 00:03:31.746 CC lib/blobfs/blobfs.o 00:03:31.746 CC lib/blobfs/tree.o 00:03:32.004 CC lib/scsi/dev.o 00:03:32.004 CC lib/nbd/nbd.o 00:03:32.004 CC lib/nbd/nbd_rpc.o 00:03:32.004 CC lib/scsi/lun.o 00:03:32.004 CC lib/nvmf/ctrlr.o 00:03:32.004 CC lib/ublk/ublk.o 00:03:32.004 CC lib/ftl/ftl_core.o 00:03:32.004 CC lib/ftl/ftl_init.o 00:03:32.004 CC lib/scsi/port.o 00:03:32.263 CC lib/scsi/scsi.o 00:03:32.263 CC lib/scsi/scsi_bdev.o 00:03:32.263 CC lib/scsi/scsi_pr.o 00:03:32.263 CC lib/scsi/scsi_rpc.o 00:03:32.522 CC lib/scsi/task.o 00:03:32.522 LIB libspdk_nbd.a 00:03:32.522 SO libspdk_nbd.so.7.0 00:03:32.522 CC lib/ftl/ftl_layout.o 00:03:32.522 SYMLINK libspdk_nbd.so 00:03:32.522 CC lib/ftl/ftl_debug.o 00:03:32.522 CC lib/ftl/ftl_io.o 00:03:32.781 CC lib/ftl/ftl_sb.o 00:03:32.781 CC lib/ftl/ftl_l2p.o 00:03:32.781 LIB libspdk_scsi.a 00:03:33.039 CC lib/ftl/ftl_l2p_flat.o 00:03:33.039 CC lib/nvmf/ctrlr_discovery.o 00:03:33.039 SO libspdk_scsi.so.9.0 00:03:33.039 CC lib/ublk/ublk_rpc.o 00:03:33.039 LIB libspdk_lvol.a 00:03:33.039 CC lib/ftl/ftl_nv_cache.o 00:03:33.039 CC lib/nvmf/ctrlr_bdev.o 00:03:33.039 SO libspdk_lvol.so.10.0 00:03:33.039 SYMLINK libspdk_scsi.so 00:03:33.039 CC lib/ftl/ftl_band.o 00:03:33.039 CC lib/nvmf/subsystem.o 00:03:33.039 SYMLINK libspdk_lvol.so 00:03:33.297 LIB libspdk_blobfs.a 00:03:33.297 LIB libspdk_ublk.a 00:03:33.297 SO libspdk_blobfs.so.10.0 00:03:33.297 SO libspdk_ublk.so.3.0 00:03:33.297 CC lib/vhost/vhost.o 00:03:33.297 CC lib/iscsi/conn.o 00:03:33.556 SYMLINK libspdk_blobfs.so 00:03:33.556 CC lib/vhost/vhost_rpc.o 00:03:33.556 SYMLINK libspdk_ublk.so 00:03:33.556 CC lib/nvmf/nvmf.o 00:03:33.556 CC lib/nvmf/nvmf_rpc.o 00:03:33.556 CC lib/nvmf/transport.o 00:03:33.813 CC lib/nvmf/tcp.o 00:03:34.070 CC lib/vhost/vhost_scsi.o 00:03:34.328 CC lib/vhost/vhost_blk.o 00:03:34.328 CC lib/ftl/ftl_band_ops.o 00:03:34.586 CC lib/ftl/ftl_writer.o 00:03:34.586 CC lib/iscsi/init_grp.o 00:03:34.845 CC lib/nvmf/stubs.o 00:03:34.845 CC lib/ftl/ftl_rq.o 00:03:34.845 CC lib/ftl/ftl_reloc.o 00:03:35.103 CC lib/iscsi/iscsi.o 00:03:35.103 CC lib/ftl/ftl_l2p_cache.o 00:03:35.103 CC lib/vhost/rte_vhost_user.o 00:03:35.386 CC lib/nvmf/mdns_server.o 00:03:35.386 CC lib/nvmf/vfio_user.o 00:03:35.386 CC lib/nvmf/rdma.o 00:03:35.386 CC lib/ftl/ftl_p2l.o 00:03:35.644 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.644 CC lib/nvmf/auth.o 00:03:35.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.901 CC lib/iscsi/md5.o 00:03:36.159 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.159 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.159 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.416 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.416 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.416 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.673 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.673 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.673 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.673 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.673 CC lib/iscsi/param.o 00:03:36.984 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.984 CC lib/ftl/utils/ftl_conf.o 00:03:36.984 CC lib/ftl/utils/ftl_md.o 00:03:36.984 LIB libspdk_vhost.a 00:03:36.984 CC 
lib/ftl/utils/ftl_mempool.o 00:03:36.984 SO libspdk_vhost.so.8.0 00:03:36.984 CC lib/ftl/utils/ftl_bitmap.o 00:03:37.243 CC lib/ftl/utils/ftl_property.o 00:03:37.243 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:37.243 SYMLINK libspdk_vhost.so 00:03:37.243 CC lib/iscsi/portal_grp.o 00:03:37.243 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:37.243 CC lib/iscsi/tgt_node.o 00:03:37.243 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:37.243 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:37.501 CC lib/iscsi/iscsi_subsystem.o 00:03:37.501 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:37.501 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:37.501 CC lib/iscsi/iscsi_rpc.o 00:03:37.501 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:37.501 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:37.501 CC lib/iscsi/task.o 00:03:37.501 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:37.759 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:37.759 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:37.759 CC lib/ftl/base/ftl_base_dev.o 00:03:37.759 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.759 CC lib/ftl/ftl_trace.o 00:03:38.016 LIB libspdk_iscsi.a 00:03:38.017 LIB libspdk_ftl.a 00:03:38.017 SO libspdk_iscsi.so.8.0 00:03:38.287 LIB libspdk_nvmf.a 00:03:38.287 SO libspdk_ftl.so.9.0 00:03:38.287 SYMLINK libspdk_iscsi.so 00:03:38.545 SO libspdk_nvmf.so.18.1 00:03:38.803 SYMLINK libspdk_ftl.so 00:03:38.803 SYMLINK libspdk_nvmf.so 00:03:39.061 CC module/env_dpdk/env_dpdk_rpc.o 00:03:39.061 CC module/vfu_device/vfu_virtio.o 00:03:39.319 CC module/accel/iaa/accel_iaa.o 00:03:39.319 CC module/accel/error/accel_error.o 00:03:39.319 CC module/sock/posix/posix.o 00:03:39.319 CC module/keyring/file/keyring.o 00:03:39.319 CC module/accel/ioat/accel_ioat.o 00:03:39.319 CC module/accel/dsa/accel_dsa.o 00:03:39.319 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:39.319 CC module/blob/bdev/blob_bdev.o 00:03:39.319 LIB libspdk_env_dpdk_rpc.a 00:03:39.319 SO libspdk_env_dpdk_rpc.so.6.0 00:03:39.319 CC module/keyring/file/keyring_rpc.o 00:03:39.319 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.319 CC module/accel/dsa/accel_dsa_rpc.o 00:03:39.577 CC module/accel/ioat/accel_ioat_rpc.o 00:03:39.577 CC module/accel/iaa/accel_iaa_rpc.o 00:03:39.577 LIB libspdk_scheduler_dynamic.a 00:03:39.577 CC module/accel/error/accel_error_rpc.o 00:03:39.577 SO libspdk_scheduler_dynamic.so.4.0 00:03:39.577 LIB libspdk_keyring_file.a 00:03:39.577 CC module/vfu_device/vfu_virtio_blk.o 00:03:39.577 LIB libspdk_blob_bdev.a 00:03:39.577 LIB libspdk_accel_dsa.a 00:03:39.577 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.577 SO libspdk_keyring_file.so.1.0 00:03:39.577 SO libspdk_blob_bdev.so.11.0 00:03:39.577 LIB libspdk_accel_ioat.a 00:03:39.577 SO libspdk_accel_dsa.so.5.0 00:03:39.577 LIB libspdk_accel_iaa.a 00:03:39.577 SO libspdk_accel_ioat.so.6.0 00:03:39.577 SO libspdk_accel_iaa.so.3.0 00:03:39.577 SYMLINK libspdk_blob_bdev.so 00:03:39.834 SYMLINK libspdk_accel_dsa.so 00:03:39.834 SYMLINK libspdk_keyring_file.so 00:03:39.834 CC module/vfu_device/vfu_virtio_scsi.o 00:03:39.834 CC module/vfu_device/vfu_virtio_rpc.o 00:03:39.834 LIB libspdk_accel_error.a 00:03:39.834 SYMLINK libspdk_accel_ioat.so 00:03:39.834 SYMLINK libspdk_accel_iaa.so 00:03:39.834 SO libspdk_accel_error.so.2.0 00:03:39.834 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:39.834 SYMLINK libspdk_accel_error.so 00:03:40.091 CC module/keyring/linux/keyring.o 00:03:40.092 CC module/scheduler/gscheduler/gscheduler.o 00:03:40.092 LIB libspdk_scheduler_dpdk_governor.a 00:03:40.092 CC module/bdev/error/vbdev_error.o 00:03:40.092 CC 
module/bdev/delay/vbdev_delay.o 00:03:40.092 CC module/bdev/gpt/gpt.o 00:03:40.092 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:40.092 CC module/keyring/linux/keyring_rpc.o 00:03:40.092 LIB libspdk_scheduler_gscheduler.a 00:03:40.092 LIB libspdk_vfu_device.a 00:03:40.092 SO libspdk_scheduler_gscheduler.so.4.0 00:03:40.092 CC module/blobfs/bdev/blobfs_bdev.o 00:03:40.092 CC module/bdev/lvol/vbdev_lvol.o 00:03:40.092 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:40.390 CC module/bdev/error/vbdev_error_rpc.o 00:03:40.390 SO libspdk_vfu_device.so.3.0 00:03:40.390 SYMLINK libspdk_scheduler_gscheduler.so 00:03:40.390 LIB libspdk_sock_posix.a 00:03:40.390 LIB libspdk_keyring_linux.a 00:03:40.390 SO libspdk_sock_posix.so.6.0 00:03:40.390 SO libspdk_keyring_linux.so.1.0 00:03:40.390 CC module/bdev/gpt/vbdev_gpt.o 00:03:40.390 SYMLINK libspdk_vfu_device.so 00:03:40.390 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:40.390 SYMLINK libspdk_keyring_linux.so 00:03:40.390 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:40.390 SYMLINK libspdk_sock_posix.so 00:03:40.390 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:40.390 LIB libspdk_bdev_error.a 00:03:40.390 CC module/bdev/malloc/bdev_malloc.o 00:03:40.390 SO libspdk_bdev_error.so.6.0 00:03:40.648 SYMLINK libspdk_bdev_error.so 00:03:40.648 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:40.648 CC module/bdev/null/bdev_null.o 00:03:40.648 CC module/bdev/nvme/bdev_nvme.o 00:03:40.648 LIB libspdk_blobfs_bdev.a 00:03:40.648 LIB libspdk_bdev_delay.a 00:03:40.648 SO libspdk_bdev_delay.so.6.0 00:03:40.648 SO libspdk_blobfs_bdev.so.6.0 00:03:40.648 LIB libspdk_bdev_gpt.a 00:03:40.648 SO libspdk_bdev_gpt.so.6.0 00:03:40.648 SYMLINK libspdk_bdev_delay.so 00:03:40.648 CC module/bdev/null/bdev_null_rpc.o 00:03:40.906 CC module/bdev/passthru/vbdev_passthru.o 00:03:40.906 SYMLINK libspdk_blobfs_bdev.so 00:03:40.906 SYMLINK libspdk_bdev_gpt.so 00:03:40.906 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:40.906 LIB libspdk_bdev_malloc.a 00:03:40.906 SO libspdk_bdev_malloc.so.6.0 00:03:40.906 CC module/bdev/raid/bdev_raid.o 00:03:40.906 LIB libspdk_bdev_null.a 00:03:40.906 CC module/bdev/split/vbdev_split.o 00:03:40.906 LIB libspdk_bdev_lvol.a 00:03:41.164 SO libspdk_bdev_null.so.6.0 00:03:41.164 SYMLINK libspdk_bdev_malloc.so 00:03:41.164 SO libspdk_bdev_lvol.so.6.0 00:03:41.164 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:41.164 SYMLINK libspdk_bdev_null.so 00:03:41.164 CC module/bdev/raid/bdev_raid_rpc.o 00:03:41.164 CC module/bdev/aio/bdev_aio.o 00:03:41.164 SYMLINK libspdk_bdev_lvol.so 00:03:41.164 CC module/bdev/raid/bdev_raid_sb.o 00:03:41.164 CC module/bdev/ftl/bdev_ftl.o 00:03:41.421 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:41.422 CC module/bdev/raid/raid0.o 00:03:41.422 CC module/bdev/split/vbdev_split_rpc.o 00:03:41.422 CC module/bdev/raid/raid1.o 00:03:41.422 LIB libspdk_bdev_passthru.a 00:03:41.680 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:41.680 SO libspdk_bdev_passthru.so.6.0 00:03:41.680 CC module/bdev/aio/bdev_aio_rpc.o 00:03:41.680 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:41.680 LIB libspdk_bdev_split.a 00:03:41.680 SYMLINK libspdk_bdev_passthru.so 00:03:41.680 CC module/bdev/raid/concat.o 00:03:41.680 SO libspdk_bdev_split.so.6.0 00:03:41.680 CC module/bdev/nvme/nvme_rpc.o 00:03:41.680 LIB libspdk_bdev_zone_block.a 00:03:41.680 SO libspdk_bdev_zone_block.so.6.0 00:03:41.680 LIB libspdk_bdev_aio.a 00:03:41.680 SYMLINK libspdk_bdev_split.so 00:03:41.938 CC module/bdev/nvme/bdev_mdns_client.o 00:03:41.938 CC module/bdev/nvme/vbdev_opal.o 
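Each SPDK component in this stretch goes through the same three steps seen interleaved above: LIB archives the static library, SO produces the versioned shared object, and SYMLINK points the unversioned name at it. A minimal sketch of what the SYMLINK step presumably amounts to, using a version actually printed earlier in this log (the real rule is generated by SPDK's build system and may differ):

    # hypothetical equivalent of 'SYMLINK libspdk_bdev.so', given the
    # 'SO libspdk_bdev.so.15.1' step logged above
    ln -sf libspdk_bdev.so.15.1 libspdk_bdev.so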
00:03:41.938 SO libspdk_bdev_aio.so.6.0 00:03:41.938 LIB libspdk_bdev_ftl.a 00:03:41.938 SYMLINK libspdk_bdev_zone_block.so 00:03:41.938 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:41.938 SO libspdk_bdev_ftl.so.6.0 00:03:41.938 CC module/bdev/iscsi/bdev_iscsi.o 00:03:41.938 SYMLINK libspdk_bdev_aio.so 00:03:41.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:41.938 SYMLINK libspdk_bdev_ftl.so 00:03:41.938 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:42.196 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:42.196 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:42.196 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:42.196 LIB libspdk_bdev_raid.a 00:03:42.454 SO libspdk_bdev_raid.so.6.0 00:03:42.454 LIB libspdk_bdev_iscsi.a 00:03:42.454 SO libspdk_bdev_iscsi.so.6.0 00:03:42.454 SYMLINK libspdk_bdev_raid.so 00:03:42.454 SYMLINK libspdk_bdev_iscsi.so 00:03:42.711 LIB libspdk_bdev_virtio.a 00:03:42.969 SO libspdk_bdev_virtio.so.6.0 00:03:42.969 SYMLINK libspdk_bdev_virtio.so 00:03:43.534 LIB libspdk_bdev_nvme.a 00:03:43.796 SO libspdk_bdev_nvme.so.7.0 00:03:43.796 SYMLINK libspdk_bdev_nvme.so 00:03:44.362 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:44.362 CC module/event/subsystems/vmd/vmd.o 00:03:44.362 CC module/event/subsystems/scheduler/scheduler.o 00:03:44.362 CC module/event/subsystems/iobuf/iobuf.o 00:03:44.362 CC module/event/subsystems/keyring/keyring.o 00:03:44.362 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:44.362 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:44.362 CC module/event/subsystems/sock/sock.o 00:03:44.362 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:44.362 LIB libspdk_event_scheduler.a 00:03:44.362 LIB libspdk_event_vmd.a 00:03:44.619 LIB libspdk_event_iobuf.a 00:03:44.619 LIB libspdk_event_vfu_tgt.a 00:03:44.619 SO libspdk_event_scheduler.so.4.0 00:03:44.619 LIB libspdk_event_sock.a 00:03:44.619 LIB libspdk_event_keyring.a 00:03:44.619 LIB libspdk_event_vhost_blk.a 00:03:44.619 SO libspdk_event_vmd.so.6.0 00:03:44.619 SO libspdk_event_iobuf.so.3.0 00:03:44.619 SO libspdk_event_vfu_tgt.so.3.0 00:03:44.619 SO libspdk_event_sock.so.5.0 00:03:44.619 SO libspdk_event_keyring.so.1.0 00:03:44.619 SO libspdk_event_vhost_blk.so.3.0 00:03:44.619 SYMLINK libspdk_event_scheduler.so 00:03:44.619 SYMLINK libspdk_event_vmd.so 00:03:44.619 SYMLINK libspdk_event_sock.so 00:03:44.619 SYMLINK libspdk_event_keyring.so 00:03:44.619 SYMLINK libspdk_event_iobuf.so 00:03:44.619 SYMLINK libspdk_event_vfu_tgt.so 00:03:44.619 SYMLINK libspdk_event_vhost_blk.so 00:03:44.877 CC module/event/subsystems/accel/accel.o 00:03:45.135 LIB libspdk_event_accel.a 00:03:45.135 SO libspdk_event_accel.so.6.0 00:03:45.135 SYMLINK libspdk_event_accel.so 00:03:45.393 CC module/event/subsystems/bdev/bdev.o 00:03:45.696 LIB libspdk_event_bdev.a 00:03:45.696 SO libspdk_event_bdev.so.6.0 00:03:45.696 SYMLINK libspdk_event_bdev.so 00:03:45.955 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:45.955 CC module/event/subsystems/scsi/scsi.o 00:03:45.955 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:45.955 CC module/event/subsystems/nbd/nbd.o 00:03:45.955 CC module/event/subsystems/ublk/ublk.o 00:03:46.213 LIB libspdk_event_nbd.a 00:03:46.213 LIB libspdk_event_ublk.a 00:03:46.213 SO libspdk_event_nbd.so.6.0 00:03:46.213 LIB libspdk_event_scsi.a 00:03:46.213 SO libspdk_event_ublk.so.3.0 00:03:46.213 SO libspdk_event_scsi.so.6.0 00:03:46.213 LIB libspdk_event_nvmf.a 00:03:46.213 SYMLINK libspdk_event_nbd.so 00:03:46.213 SYMLINK libspdk_event_ublk.so 00:03:46.213 SO libspdk_event_nvmf.so.6.0 00:03:46.213 
SYMLINK libspdk_event_scsi.so 00:03:46.471 SYMLINK libspdk_event_nvmf.so 00:03:46.471 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:46.472 CC module/event/subsystems/iscsi/iscsi.o 00:03:46.729 LIB libspdk_event_vhost_scsi.a 00:03:46.729 LIB libspdk_event_iscsi.a 00:03:46.729 SO libspdk_event_vhost_scsi.so.3.0 00:03:46.729 SO libspdk_event_iscsi.so.6.0 00:03:46.729 SYMLINK libspdk_event_vhost_scsi.so 00:03:46.987 SYMLINK libspdk_event_iscsi.so 00:03:46.988 SO libspdk.so.6.0 00:03:46.988 SYMLINK libspdk.so 00:03:47.246 CC app/trace_record/trace_record.o 00:03:47.246 CXX app/trace/trace.o 00:03:47.246 CC app/iscsi_tgt/iscsi_tgt.o 00:03:47.246 CC test/thread/poller_perf/poller_perf.o 00:03:47.246 CC app/nvmf_tgt/nvmf_main.o 00:03:47.504 CC examples/util/zipf/zipf.o 00:03:47.504 CC examples/ioat/perf/perf.o 00:03:47.504 CC test/dma/test_dma/test_dma.o 00:03:47.504 CC test/app/bdev_svc/bdev_svc.o 00:03:47.504 LINK iscsi_tgt 00:03:47.504 LINK poller_perf 00:03:47.504 LINK zipf 00:03:47.761 LINK nvmf_tgt 00:03:47.761 LINK spdk_trace_record 00:03:47.761 LINK bdev_svc 00:03:47.761 LINK ioat_perf 00:03:47.761 LINK spdk_trace 00:03:47.761 LINK test_dma 00:03:48.019 CC examples/ioat/verify/verify.o 00:03:48.019 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.019 CC app/spdk_tgt/spdk_tgt.o 00:03:48.019 CC test/app/histogram_perf/histogram_perf.o 00:03:48.019 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:48.019 CC app/spdk_lspci/spdk_lspci.o 00:03:48.276 LINK verify 00:03:48.276 TEST_HEADER include/spdk/accel.h 00:03:48.276 CC examples/sock/hello_world/hello_sock.o 00:03:48.276 TEST_HEADER include/spdk/accel_module.h 00:03:48.276 TEST_HEADER include/spdk/assert.h 00:03:48.276 LINK histogram_perf 00:03:48.276 TEST_HEADER include/spdk/barrier.h 00:03:48.276 TEST_HEADER include/spdk/base64.h 00:03:48.276 TEST_HEADER include/spdk/bdev.h 00:03:48.276 TEST_HEADER include/spdk/bdev_module.h 00:03:48.276 TEST_HEADER include/spdk/bdev_zone.h 00:03:48.276 LINK interrupt_tgt 00:03:48.276 TEST_HEADER include/spdk/bit_array.h 00:03:48.276 TEST_HEADER include/spdk/bit_pool.h 00:03:48.276 TEST_HEADER include/spdk/blob_bdev.h 00:03:48.276 LINK spdk_tgt 00:03:48.276 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:48.276 LINK spdk_lspci 00:03:48.276 TEST_HEADER include/spdk/blobfs.h 00:03:48.276 TEST_HEADER include/spdk/blob.h 00:03:48.276 TEST_HEADER include/spdk/conf.h 00:03:48.276 TEST_HEADER include/spdk/config.h 00:03:48.276 TEST_HEADER include/spdk/cpuset.h 00:03:48.276 TEST_HEADER include/spdk/crc16.h 00:03:48.276 CC examples/thread/thread/thread_ex.o 00:03:48.276 TEST_HEADER include/spdk/crc32.h 00:03:48.276 TEST_HEADER include/spdk/crc64.h 00:03:48.276 TEST_HEADER include/spdk/dif.h 00:03:48.276 TEST_HEADER include/spdk/dma.h 00:03:48.276 TEST_HEADER include/spdk/endian.h 00:03:48.276 TEST_HEADER include/spdk/env_dpdk.h 00:03:48.276 TEST_HEADER include/spdk/env.h 00:03:48.276 TEST_HEADER include/spdk/event.h 00:03:48.276 TEST_HEADER include/spdk/fd_group.h 00:03:48.276 TEST_HEADER include/spdk/fd.h 00:03:48.276 TEST_HEADER include/spdk/file.h 00:03:48.276 TEST_HEADER include/spdk/ftl.h 00:03:48.276 TEST_HEADER include/spdk/gpt_spec.h 00:03:48.276 TEST_HEADER include/spdk/hexlify.h 00:03:48.276 TEST_HEADER include/spdk/histogram_data.h 00:03:48.276 TEST_HEADER include/spdk/idxd.h 00:03:48.276 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.276 TEST_HEADER include/spdk/init.h 00:03:48.276 TEST_HEADER include/spdk/ioat.h 00:03:48.276 TEST_HEADER include/spdk/ioat_spec.h 00:03:48.276 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:48.276 TEST_HEADER include/spdk/json.h 00:03:48.276 TEST_HEADER include/spdk/jsonrpc.h 00:03:48.276 TEST_HEADER include/spdk/keyring.h 00:03:48.276 TEST_HEADER include/spdk/keyring_module.h 00:03:48.276 TEST_HEADER include/spdk/likely.h 00:03:48.276 TEST_HEADER include/spdk/log.h 00:03:48.276 TEST_HEADER include/spdk/lvol.h 00:03:48.276 TEST_HEADER include/spdk/memory.h 00:03:48.276 TEST_HEADER include/spdk/mmio.h 00:03:48.276 TEST_HEADER include/spdk/nbd.h 00:03:48.276 TEST_HEADER include/spdk/notify.h 00:03:48.276 TEST_HEADER include/spdk/nvme.h 00:03:48.276 TEST_HEADER include/spdk/nvme_intel.h 00:03:48.276 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:48.276 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:48.276 TEST_HEADER include/spdk/nvme_spec.h 00:03:48.276 TEST_HEADER include/spdk/nvme_zns.h 00:03:48.276 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:48.276 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:48.276 TEST_HEADER include/spdk/nvmf.h 00:03:48.276 TEST_HEADER include/spdk/nvmf_spec.h 00:03:48.276 TEST_HEADER include/spdk/nvmf_transport.h 00:03:48.276 TEST_HEADER include/spdk/opal.h 00:03:48.276 TEST_HEADER include/spdk/opal_spec.h 00:03:48.276 TEST_HEADER include/spdk/pci_ids.h 00:03:48.276 TEST_HEADER include/spdk/pipe.h 00:03:48.276 TEST_HEADER include/spdk/queue.h 00:03:48.276 TEST_HEADER include/spdk/reduce.h 00:03:48.276 TEST_HEADER include/spdk/rpc.h 00:03:48.276 TEST_HEADER include/spdk/scheduler.h 00:03:48.276 TEST_HEADER include/spdk/scsi.h 00:03:48.276 TEST_HEADER include/spdk/scsi_spec.h 00:03:48.276 TEST_HEADER include/spdk/sock.h 00:03:48.276 TEST_HEADER include/spdk/stdinc.h 00:03:48.276 TEST_HEADER include/spdk/string.h 00:03:48.276 TEST_HEADER include/spdk/thread.h 00:03:48.533 TEST_HEADER include/spdk/trace.h 00:03:48.533 TEST_HEADER include/spdk/trace_parser.h 00:03:48.533 TEST_HEADER include/spdk/tree.h 00:03:48.533 TEST_HEADER include/spdk/ublk.h 00:03:48.533 TEST_HEADER include/spdk/util.h 00:03:48.533 TEST_HEADER include/spdk/uuid.h 00:03:48.533 TEST_HEADER include/spdk/version.h 00:03:48.533 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:48.533 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:48.533 TEST_HEADER include/spdk/vhost.h 00:03:48.533 TEST_HEADER include/spdk/vmd.h 00:03:48.533 TEST_HEADER include/spdk/xor.h 00:03:48.533 TEST_HEADER include/spdk/zipf.h 00:03:48.533 CXX test/cpp_headers/accel.o 00:03:48.533 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:48.533 LINK hello_sock 00:03:48.533 LINK thread 00:03:48.533 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:48.790 LINK nvme_fuzz 00:03:48.790 CC app/spdk_nvme_perf/perf.o 00:03:48.790 CXX test/cpp_headers/accel_module.o 00:03:48.790 CC examples/vmd/lsvmd/lsvmd.o 00:03:48.790 CC examples/idxd/perf/perf.o 00:03:48.790 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:49.048 CXX test/cpp_headers/assert.o 00:03:49.048 LINK lsvmd 00:03:49.048 CXX test/cpp_headers/barrier.o 00:03:49.048 CC app/spdk_nvme_identify/identify.o 00:03:49.048 CC app/spdk_nvme_discover/discovery_aer.o 00:03:49.305 CXX test/cpp_headers/base64.o 00:03:49.305 CC examples/vmd/led/led.o 00:03:49.305 LINK idxd_perf 00:03:49.305 LINK spdk_nvme_discover 00:03:49.305 CXX test/cpp_headers/bdev.o 00:03:49.305 LINK led 00:03:49.563 LINK vhost_fuzz 00:03:49.563 CC test/env/mem_callbacks/mem_callbacks.o 00:03:49.563 CXX test/cpp_headers/bdev_module.o 00:03:49.563 CC examples/nvme/hello_world/hello_world.o 00:03:49.563 CC examples/nvme/reconnect/reconnect.o 00:03:49.821 CC app/spdk_top/spdk_top.o 00:03:49.821 CC 
examples/accel/perf/accel_perf.o 00:03:49.821 CXX test/cpp_headers/bdev_zone.o 00:03:49.821 LINK hello_world 00:03:49.821 LINK spdk_nvme_perf 00:03:50.079 CXX test/cpp_headers/bit_array.o 00:03:50.079 LINK reconnect 00:03:50.079 LINK mem_callbacks 00:03:50.337 CXX test/cpp_headers/bit_pool.o 00:03:50.337 LINK spdk_nvme_identify 00:03:50.337 CC app/vhost/vhost.o 00:03:50.595 LINK accel_perf 00:03:50.595 CC test/env/vtophys/vtophys.o 00:03:50.595 CXX test/cpp_headers/blob_bdev.o 00:03:50.595 CXX test/cpp_headers/blobfs_bdev.o 00:03:50.595 CC examples/blob/hello_world/hello_blob.o 00:03:50.595 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.595 LINK vhost 00:03:50.853 LINK vtophys 00:03:50.853 CXX test/cpp_headers/blobfs.o 00:03:50.853 CC test/app/jsoncat/jsoncat.o 00:03:50.853 LINK hello_blob 00:03:51.111 CXX test/cpp_headers/blob.o 00:03:51.111 CC examples/nvme/arbitration/arbitration.o 00:03:51.111 LINK iscsi_fuzz 00:03:51.369 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.369 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.369 LINK jsoncat 00:03:51.369 CXX test/cpp_headers/conf.o 00:03:51.369 LINK nvme_manage 00:03:51.369 CC examples/blob/cli/blobcli.o 00:03:51.369 LINK env_dpdk_post_init 00:03:51.628 LINK spdk_top 00:03:51.628 CXX test/cpp_headers/config.o 00:03:51.628 CXX test/cpp_headers/cpuset.o 00:03:51.628 CXX test/cpp_headers/crc16.o 00:03:51.628 CC test/env/memory/memory_ut.o 00:03:51.628 LINK arbitration 00:03:51.628 LINK hello_bdev 00:03:51.885 CC test/app/stub/stub.o 00:03:51.885 CC test/env/pci/pci_ut.o 00:03:51.885 CXX test/cpp_headers/crc32.o 00:03:51.885 CC app/spdk_dd/spdk_dd.o 00:03:52.143 LINK blobcli 00:03:52.400 CXX test/cpp_headers/crc64.o 00:03:52.400 LINK stub 00:03:52.400 CC app/fio/nvme/fio_plugin.o 00:03:52.658 CC examples/nvme/hotplug/hotplug.o 00:03:52.658 CXX test/cpp_headers/dif.o 00:03:52.658 CXX test/cpp_headers/dma.o 00:03:52.658 LINK pci_ut 00:03:52.658 CC examples/bdev/bdevperf/bdevperf.o 00:03:52.916 CXX test/cpp_headers/endian.o 00:03:52.916 LINK spdk_dd 00:03:52.916 LINK hotplug 00:03:53.173 CC test/event/event_perf/event_perf.o 00:03:53.173 CC app/fio/bdev/fio_plugin.o 00:03:53.470 LINK event_perf 00:03:53.470 CXX test/cpp_headers/env_dpdk.o 00:03:53.470 CC test/rpc_client/rpc_client_test.o 00:03:53.470 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:53.470 CC test/nvme/aer/aer.o 00:03:53.727 LINK spdk_nvme 00:03:53.727 LINK memory_ut 00:03:53.727 LINK rpc_client_test 00:03:53.727 CXX test/cpp_headers/env.o 00:03:53.727 CC test/event/reactor/reactor.o 00:03:53.727 LINK cmb_copy 00:03:53.989 CC test/event/reactor_perf/reactor_perf.o 00:03:53.989 CXX test/cpp_headers/event.o 00:03:53.989 LINK reactor 00:03:53.989 CC examples/nvme/abort/abort.o 00:03:54.249 LINK spdk_bdev 00:03:54.249 CC test/event/app_repeat/app_repeat.o 00:03:54.249 LINK aer 00:03:54.249 CXX test/cpp_headers/fd_group.o 00:03:54.249 LINK reactor_perf 00:03:54.249 CC test/accel/dif/dif.o 00:03:54.249 LINK bdevperf 00:03:54.249 CXX test/cpp_headers/fd.o 00:03:54.507 LINK app_repeat 00:03:54.507 CC test/nvme/reset/reset.o 00:03:54.507 CC test/blobfs/mkfs/mkfs.o 00:03:54.507 CXX test/cpp_headers/file.o 00:03:54.507 CC test/event/scheduler/scheduler.o 00:03:54.508 LINK abort 00:03:54.765 LINK mkfs 00:03:54.765 CC test/lvol/esnap/esnap.o 00:03:54.765 CXX test/cpp_headers/ftl.o 00:03:54.765 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:54.765 CC test/nvme/sgl/sgl.o 00:03:54.765 LINK reset 00:03:54.765 LINK scheduler 00:03:55.023 CC test/nvme/e2edp/nvme_dp.o 
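The fio_plugin objects compiled above (app/fio/nvme and app/fio/bdev) are the SPDK fio ioengines. A minimal sketch of how the NVMe plugin is typically driven once built; the plugin path, PCI address, and job parameters below are illustrative assumptions, not values from this log:

    # Illustrative only: exercise a local NVMe controller through the SPDK
    # fio plugin. The plugin path varies by SPDK version; traddr is hypothetical.
    LD_PRELOAD=./build/fio/spdk_nvme fio --name=probe --thread=1 \
        --ioengine=spdk --filename='trtype=PCIe traddr=0000.00.10.0 ns=1' \
        --rw=randread --bs=4k --time_based=1 --runtime=5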
00:03:55.023 LINK dif 00:03:55.023 CXX test/cpp_headers/gpt_spec.o 00:03:55.023 CXX test/cpp_headers/hexlify.o 00:03:55.023 LINK pmr_persistence 00:03:55.023 LINK sgl 00:03:55.023 CC test/nvme/overhead/overhead.o 00:03:55.023 CXX test/cpp_headers/histogram_data.o 00:03:55.282 CXX test/cpp_headers/idxd.o 00:03:55.282 CC test/nvme/err_injection/err_injection.o 00:03:55.282 CC test/nvme/startup/startup.o 00:03:55.282 LINK nvme_dp 00:03:55.282 CC test/nvme/reserve/reserve.o 00:03:55.540 CXX test/cpp_headers/idxd_spec.o 00:03:55.540 CC test/nvme/simple_copy/simple_copy.o 00:03:55.540 LINK err_injection 00:03:55.540 LINK startup 00:03:55.540 CC test/nvme/connect_stress/connect_stress.o 00:03:55.540 LINK overhead 00:03:55.540 CXX test/cpp_headers/init.o 00:03:55.540 LINK reserve 00:03:55.799 CC test/nvme/boot_partition/boot_partition.o 00:03:55.799 CC test/nvme/compliance/nvme_compliance.o 00:03:55.799 LINK connect_stress 00:03:55.799 CXX test/cpp_headers/ioat.o 00:03:55.799 CXX test/cpp_headers/ioat_spec.o 00:03:55.799 LINK simple_copy 00:03:55.799 CC test/nvme/fused_ordering/fused_ordering.o 00:03:55.799 LINK boot_partition 00:03:56.058 CXX test/cpp_headers/iscsi_spec.o 00:03:56.058 CXX test/cpp_headers/json.o 00:03:56.058 CXX test/cpp_headers/jsonrpc.o 00:03:56.058 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.058 LINK fused_ordering 00:03:56.317 CXX test/cpp_headers/keyring.o 00:03:56.317 CC examples/nvmf/nvmf/nvmf.o 00:03:56.317 LINK nvme_compliance 00:03:56.317 CC test/nvme/fdp/fdp.o 00:03:56.317 CC test/bdev/bdevio/bdevio.o 00:03:56.317 CXX test/cpp_headers/keyring_module.o 00:03:56.317 CXX test/cpp_headers/likely.o 00:03:56.317 LINK doorbell_aers 00:03:56.317 CXX test/cpp_headers/log.o 00:03:56.574 CXX test/cpp_headers/lvol.o 00:03:56.574 CXX test/cpp_headers/memory.o 00:03:56.574 CC test/nvme/cuse/cuse.o 00:03:56.574 CXX test/cpp_headers/mmio.o 00:03:56.574 CXX test/cpp_headers/nbd.o 00:03:56.574 LINK nvmf 00:03:56.574 CXX test/cpp_headers/notify.o 00:03:56.574 CXX test/cpp_headers/nvme.o 00:03:56.574 CXX test/cpp_headers/nvme_intel.o 00:03:56.574 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.832 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.832 LINK fdp 00:03:56.832 LINK bdevio 00:03:56.832 CXX test/cpp_headers/nvme_spec.o 00:03:56.832 CXX test/cpp_headers/nvme_zns.o 00:03:56.832 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.832 CXX test/cpp_headers/nvmf.o 00:03:56.832 CXX test/cpp_headers/nvmf_spec.o 00:03:56.832 CXX test/cpp_headers/nvmf_transport.o 00:03:57.090 CXX test/cpp_headers/opal.o 00:03:57.090 CXX test/cpp_headers/opal_spec.o 00:03:57.090 CXX test/cpp_headers/pci_ids.o 00:03:57.090 CXX test/cpp_headers/pipe.o 00:03:57.090 CXX test/cpp_headers/queue.o 00:03:57.090 CXX test/cpp_headers/reduce.o 00:03:57.090 CXX test/cpp_headers/rpc.o 00:03:57.090 CXX test/cpp_headers/scheduler.o 00:03:57.090 CXX test/cpp_headers/scsi.o 00:03:57.090 CXX test/cpp_headers/scsi_spec.o 00:03:57.348 CXX test/cpp_headers/sock.o 00:03:57.348 CXX test/cpp_headers/stdinc.o 00:03:57.348 CXX test/cpp_headers/string.o 00:03:57.348 CXX test/cpp_headers/thread.o 00:03:57.348 CXX test/cpp_headers/trace.o 00:03:57.348 CXX test/cpp_headers/trace_parser.o 00:03:57.348 CXX test/cpp_headers/tree.o 00:03:57.348 CXX test/cpp_headers/ublk.o 00:03:57.348 CXX test/cpp_headers/util.o 00:03:57.606 CXX test/cpp_headers/uuid.o 00:03:57.606 CXX test/cpp_headers/version.o 00:03:57.606 CXX test/cpp_headers/vfio_user_pci.o 00:03:57.606 CXX test/cpp_headers/vfio_user_spec.o 
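The run of CXX test/cpp_headers/*.o lines here (continuing just below) is the public-header check: each installed spdk/*.h is compiled as its own C++ translation unit so that C++-incompatible constructs are caught at build time. The idea, reduced to a shell sketch (this loop is an illustration, not the project's actual Makefile rule):

    # Illustrative: compile every public header as a standalone C++ TU.
    for h in include/spdk/*.h; do
        tu=$(mktemp --suffix=.cpp)
        printf '#include <spdk/%s>\n' "$(basename "$h")" > "$tu"
        c++ -I include -c "$tu" -o /dev/null || echo "C++ check failed: $h"
        rm -f "$tu"
    done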
00:03:57.606 CXX test/cpp_headers/vhost.o 00:03:57.606 CXX test/cpp_headers/vmd.o 00:03:57.606 CXX test/cpp_headers/xor.o 00:03:57.606 CXX test/cpp_headers/zipf.o 00:03:58.172 LINK cuse 00:04:02.375 LINK esnap 00:04:02.633 00:04:02.633 real 1m28.096s 00:04:02.633 user 8m44.206s 00:04:02.633 sys 1m57.849s 00:04:02.633 00:23:07 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:02.633 00:23:07 make -- common/autotest_common.sh@10 -- $ set +x 00:04:02.633 ************************************ 00:04:02.633 END TEST make 00:04:02.633 ************************************ 00:04:02.633 00:23:07 -- common/autotest_common.sh@1142 -- $ return 0 00:04:02.633 00:23:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:02.633 00:23:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:02.633 00:23:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:02.633 00:23:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.633 00:23:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:02.633 00:23:07 -- pm/common@44 -- $ pid=5203 00:04:02.633 00:23:07 -- pm/common@50 -- $ kill -TERM 5203 00:04:02.633 00:23:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.633 00:23:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:02.633 00:23:07 -- pm/common@44 -- $ pid=5205 00:04:02.633 00:23:07 -- pm/common@50 -- $ kill -TERM 5205 00:04:02.633 00:23:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:02.633 00:23:07 -- nvmf/common.sh@7 -- # uname -s 00:04:02.633 00:23:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.633 00:23:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.633 00:23:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.633 00:23:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.633 00:23:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.633 00:23:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.633 00:23:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.633 00:23:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.633 00:23:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.633 00:23:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.633 00:23:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:04:02.633 00:23:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:04:02.633 00:23:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.633 00:23:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.633 00:23:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:02.633 00:23:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.633 00:23:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:02.633 00:23:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.633 00:23:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.633 00:23:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.633 00:23:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.633 00:23:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.633 00:23:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.633 00:23:07 -- paths/export.sh@5 -- # export PATH 00:04:02.633 00:23:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.633 00:23:07 -- nvmf/common.sh@47 -- # : 0 00:04:02.633 00:23:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:02.633 00:23:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:02.633 00:23:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.633 00:23:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.633 00:23:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.633 00:23:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:02.633 00:23:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:02.633 00:23:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:02.633 00:23:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:02.633 00:23:07 -- spdk/autotest.sh@32 -- # uname -s 00:04:02.633 00:23:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:02.633 00:23:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:02.633 00:23:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:02.633 00:23:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:02.633 00:23:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:02.633 00:23:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:02.891 00:23:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:02.891 00:23:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:02.891 00:23:07 -- spdk/autotest.sh@48 -- # udevadm_pid=55355 00:04:02.891 00:23:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:02.891 00:23:07 -- pm/common@17 -- # local monitor 00:04:02.891 00:23:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.891 00:23:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.891 00:23:07 -- pm/common@25 -- # sleep 1 00:04:02.891 00:23:07 -- pm/common@21 -- # date +%s 00:04:02.891 00:23:07 -- pm/common@21 -- # date +%s 00:04:02.891 00:23:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:02.891 00:23:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720743787 
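The paths/export.sh trace above simply layers the Go, golangci, and protoc tool directories onto PATH and exports the result; as the echoed value shows, repeated prepending leaves duplicates behind. The same pattern with a duplicate guard, for reference (the guard is an illustrative addition; the traced script does not dedupe):

    # Illustrative: prepend a directory to PATH only if not already present.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already on PATH, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH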
00:04:02.891 00:23:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720743787 00:04:02.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720743787_collect-vmstat.pm.log 00:04:02.891 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720743787_collect-cpu-load.pm.log 00:04:03.825 00:23:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.825 00:23:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:03.825 00:23:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.825 00:23:08 -- common/autotest_common.sh@10 -- # set +x 00:04:03.825 00:23:08 -- spdk/autotest.sh@59 -- # create_test_list 00:04:03.825 00:23:08 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:03.826 00:23:08 -- common/autotest_common.sh@10 -- # set +x 00:04:03.826 00:23:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:03.826 00:23:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:03.826 00:23:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:03.826 00:23:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:03.826 00:23:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:03.826 00:23:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:03.826 00:23:08 -- common/autotest_common.sh@1455 -- # uname 00:04:03.826 00:23:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:03.826 00:23:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:03.826 00:23:08 -- common/autotest_common.sh@1475 -- # uname 00:04:03.826 00:23:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:03.826 00:23:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:03.826 00:23:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:03.826 00:23:08 -- spdk/autotest.sh@72 -- # hash lcov 00:04:03.826 00:23:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:03.826 00:23:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:03.826 --rc lcov_branch_coverage=1 00:04:03.826 --rc lcov_function_coverage=1 00:04:03.826 --rc genhtml_branch_coverage=1 00:04:03.826 --rc genhtml_function_coverage=1 00:04:03.826 --rc genhtml_legend=1 00:04:03.826 --rc geninfo_all_blocks=1 00:04:03.826 ' 00:04:03.826 00:23:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:03.826 --rc lcov_branch_coverage=1 00:04:03.826 --rc lcov_function_coverage=1 00:04:03.826 --rc genhtml_branch_coverage=1 00:04:03.826 --rc genhtml_function_coverage=1 00:04:03.826 --rc genhtml_legend=1 00:04:03.826 --rc geninfo_all_blocks=1 00:04:03.826 ' 00:04:03.826 00:23:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:03.826 --rc lcov_branch_coverage=1 00:04:03.826 --rc lcov_function_coverage=1 00:04:03.826 --rc genhtml_branch_coverage=1 00:04:03.826 --rc genhtml_function_coverage=1 00:04:03.826 --rc genhtml_legend=1 00:04:03.826 --rc geninfo_all_blocks=1 00:04:03.826 --no-external' 00:04:03.826 00:23:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:03.826 --rc lcov_branch_coverage=1 00:04:03.826 --rc lcov_function_coverage=1 00:04:03.826 --rc genhtml_branch_coverage=1 00:04:03.826 --rc genhtml_function_coverage=1 00:04:03.826 --rc genhtml_legend=1 00:04:03.826 --rc geninfo_all_blocks=1 00:04:03.826 --no-external' 00:04:03.826 00:23:08 -- spdk/autotest.sh@83 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:04.084 lcov: LCOV version 1.14 00:04:04.084 00:23:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:22.246 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:22.246 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:34.439 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:34.439 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:34.440 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 
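The geninfo warnings running through this stretch are expected: the test/cpp_headers objects define no functions, so their .gcno files yield no coverage data and geninfo skips them. For reference, the capture flow the log is executing, reduced to its essentials; the baseline capture matches the command traced above, while the post-test capture and merge follow the usual lcov pattern and are sketched here as an assumption about the later steps:

    OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external'
    # Baseline before any tests run (as traced above):
    lcov $OPTS -q -c -i -t Baseline -d "$src" -o cov_base.info
    # ... run the test suites ...
    # Post-test capture and merge (standard lcov usage, assumed here):
    lcov $OPTS -q -c -t Tests -d "$src" -o cov_test.info
    lcov $OPTS -a cov_base.info -a cov_test.info -o cov_total.info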
00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:34.440 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:34.440 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:34.440 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:34.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:36.970 00:23:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:36.970 00:23:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.970 00:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:36.970 00:23:41 -- spdk/autotest.sh@91 -- # rm -f 00:04:36.970 00:23:41 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.536 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:37.536 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.536 00:23:42 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:37.536 00:23:42 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.536 00:23:42 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.536 00:23:42 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.536 00:23:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.536 00:23:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.536 00:23:42 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.536 00:23:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.536 00:23:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.536 00:23:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.536 00:23:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:37.536 00:23:42 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:37.536 
00:23:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.536 00:23:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.536 00:23:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.536 00:23:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:37.536 00:23:42 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:37.536 00:23:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.537 00:23:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.537 00:23:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.537 00:23:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:37.537 00:23:42 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:37.537 00:23:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.537 00:23:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.537 00:23:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:37.537 00:23:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.537 00:23:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.537 00:23:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:37.537 00:23:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:37.537 00:23:42 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.794 No valid GPT data, bailing 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # pt= 00:04:37.794 00:23:42 -- scripts/common.sh@392 -- # return 1 00:04:37.794 00:23:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.794 1+0 records in 00:04:37.794 1+0 records out 00:04:37.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051118 s, 205 MB/s 00:04:37.794 00:23:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.794 00:23:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.794 00:23:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:37.794 00:23:42 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:37.794 00:23:42 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:37.794 No valid GPT data, bailing 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # pt= 00:04:37.794 00:23:42 -- scripts/common.sh@392 -- # return 1 00:04:37.794 00:23:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:37.794 1+0 records in 00:04:37.794 1+0 records out 00:04:37.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386708 s, 271 MB/s 00:04:37.794 00:23:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.794 00:23:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.794 00:23:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:37.794 00:23:42 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:37.794 00:23:42 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:37.794 No valid GPT data, bailing 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:37.794 00:23:42 -- scripts/common.sh@391 -- # pt= 00:04:37.794 00:23:42 -- scripts/common.sh@392 -- # return 1 
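The "No valid GPT data, bailing" blocks here are the pre-test disk sanitization: each NVMe namespace not in use is checked for a partition table and, finding none, has its first MiB zeroed before the tests claim it (the dd transfer stats follow each wipe). The same two steps in isolation; destructive, so only ever run against disposable test disks, and the device name below is just one taken from this log:

    # Illustrative and destructive: confirm no partition table, then zero
    # the first MiB, as autotest does before using a namespace.
    dev=/dev/nvme1n2                     # device name taken from the log
    if [ -z "$(blkid -s PTTYPE -o value "$dev")" ]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi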
00:04:37.794 00:23:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:37.794 1+0 records in 00:04:37.794 1+0 records out 00:04:37.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360037 s, 291 MB/s 00:04:37.794 00:23:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.794 00:23:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.794 00:23:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:37.794 00:23:42 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:37.794 00:23:42 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:38.052 No valid GPT data, bailing 00:04:38.052 00:23:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:38.052 00:23:42 -- scripts/common.sh@391 -- # pt= 00:04:38.052 00:23:42 -- scripts/common.sh@392 -- # return 1 00:04:38.052 00:23:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:38.052 1+0 records in 00:04:38.052 1+0 records out 00:04:38.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00368068 s, 285 MB/s 00:04:38.052 00:23:42 -- spdk/autotest.sh@118 -- # sync 00:04:38.052 00:23:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.052 00:23:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.052 00:23:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.947 00:23:44 -- spdk/autotest.sh@124 -- # uname -s 00:04:39.947 00:23:44 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:39.947 00:23:44 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.947 00:23:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.947 00:23:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.947 00:23:44 -- common/autotest_common.sh@10 -- # set +x 00:04:39.947 ************************************ 00:04:39.947 START TEST setup.sh 00:04:39.947 ************************************ 00:04:39.947 00:23:44 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.947 * Looking for test storage... 00:04:39.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.947 00:23:44 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:39.947 00:23:44 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.947 00:23:44 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.947 00:23:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.947 00:23:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.947 00:23:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.947 ************************************ 00:04:39.947 START TEST acl 00:04:39.947 ************************************ 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.947 * Looking for test storage... 
00:04:39.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.947 00:23:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.947 00:23:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.948 00:23:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.948 00:23:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:39.948 00:23:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:39.948 00:23:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:39.948 00:23:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.948 00:23:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:39.948 00:23:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.948 00:23:44 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.879 00:23:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.879 00:23:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:40.880 00:23:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.880 00:23:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:40.880 00:23:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.880 00:23:45 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.443 00:23:46 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 Hugepages 00:04:41.443 node hugesize free / total 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 00:04:41.443 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:41.443 00:23:46 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.443 00:23:46 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.443 00:23:46 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.443 00:23:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.443 ************************************ 00:04:41.443 START TEST denied 00:04:41.443 ************************************ 00:04:41.443 00:23:46 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:41.443 00:23:46 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:41.443 00:23:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:41.443 00:23:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:41.443 00:23:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.443 00:23:46 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.377 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.377 00:23:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.944 00:04:42.944 real 0m1.476s 00:04:42.944 user 0m0.565s 00:04:42.944 sys 0m0.810s 00:04:42.944 00:23:47 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.944 00:23:47 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:42.944 ************************************ 00:04:42.944 END TEST denied 00:04:42.944 ************************************ 00:04:42.944 00:23:47 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:42.944 00:23:47 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.944 00:23:47 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.944 00:23:47 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.944 00:23:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:42.944 ************************************ 00:04:42.944 START TEST allowed 00:04:42.944 ************************************ 00:04:42.944 00:23:47 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:42.944 00:23:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:42.944 00:23:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:42.944 00:23:47 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:42.944 00:23:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.944 00:23:47 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.887 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.887 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.888 00:23:48 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.478 00:04:44.478 real 0m1.491s 00:04:44.478 user 0m0.651s 00:04:44.478 sys 0m0.831s 00:04:44.478 00:23:49 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable
00:04:44.478 00:23:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:44.478 ************************************
00:04:44.478 END TEST allowed
00:04:44.478 ************************************
00:04:44.478 00:23:49 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:04:44.478 ************************************
00:04:44.478 END TEST acl
00:04:44.478 ************************************
00:04:44.478
00:04:44.478 real 0m4.667s
00:04:44.478 user 0m1.987s
00:04:44.478 sys 0m2.580s
00:04:44.478 00:23:49 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.478 00:23:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:44.738 00:23:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:44.738 00:23:49 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:44.738 00:23:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:44.738 00:23:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.738 00:23:49 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:44.738 ************************************
00:04:44.738 START TEST hugepages
00:04:44.738 ************************************
00:04:44.738 00:23:49 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:44.738 * Looking for test storage...
00:04:44.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5664228 kB' 'MemAvailable: 7395684 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 480148 kB' 'Inactive: 1569140 kB' 'Active(anon): 115048 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 106184 kB' 'Mapped: 51548 kB' 'Shmem: 10488 kB' 'KReclaimable: 68308 kB' 'Slab: 142292 kB' 'SReclaimable: 68308 kB' 'SUnreclaim: 73984 kB' 'KernelStack: 6796 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:44.738 00:23:49 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
[... identical read/compare/continue xtrace records for MemFree through HugePages_Surp elided: each field is tested against Hugepagesize and skipped ...]
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
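The scan traced above is easier to follow outside the xtrace. A minimal, self-contained sketch of the pattern, assuming only the standard /proc/meminfo and per-node sysfs meminfo layout (the function name is illustrative; the real helper is get_meminfo in test/setup/common.sh):

    #!/usr/bin/env bash
    # Illustrative sketch of the get_meminfo lookup traced above: snapshot a
    # meminfo file, strip the "Node <n> " prefix that per-node sysfs copies
    # carry, split each line on ': ', and print the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }             # no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<<"$line"  # split "Field:   value kB"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # value only, unit dropped
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo_sketch Hugepagesize                # prints 2048 on this host

On this host the lookup returns 2048, i.e. 2 MiB pages, which is exactly what the trace echoes before returning.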
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
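Before any sub-test runs, get_nodes and clear_hp (traced above) enumerate the NUMA topology and zero every per-node hugepage pool. A hedged sketch of that reset, assuming the standard sysfs layout (writing nr_hugepages needs root; the function name is illustrative):

    #!/usr/bin/env bash
    # Illustrative sketch of the get_nodes + clear_hp sequence traced above:
    # walk the sysfs NUMA nodes and zero every hugepage pool size found there,
    # so the test starts from a clean slate.
    shopt -s extglob nullglob

    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node+([0-9]); do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes   # flags the pools as reset, as in the trace
    }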
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:44.739 00:23:49 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:44.739 00:23:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:44.739 00:23:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.739 00:23:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:44.739 ************************************
00:04:44.739 START TEST default_setup
00:04:44.739 ************************************
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.739 00:23:49 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:45.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:45.566 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:45.566 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
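The get_test_nr_hugepages trace above reduces to one division: the requested size over the default hugepage size, 2097152 / 2048 = 1024 pages, pinned to node 0. A sketch of that arithmetic, with variable names mirroring the trace but the function itself purely illustrative:

    #!/usr/bin/env bash
    # Illustrative sketch of the sizing step traced above; units follow the
    # trace (2048 kB pages, as returned by the earlier Hugepagesize lookup).
    get_test_nr_hugepages_sketch() {
        local size=$1
        shift                                        # remaining args: node ids
        local default_hugepages=2048
        nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
        nodes_test=()
        local node
        for node in "$@"; do
            nodes_test[node]=$nr_hugepages           # one full pool per node
        done
    }

    get_test_nr_hugepages_sketch 2097152 0
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 1024 and 1024

After the count is fixed, setup.sh is re-run (the PCI rebind lines above) so the NVMe devices are bound to uio_pci_generic before verification starts.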
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7750772 kB' 'MemAvailable: 9482188 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496584 kB' 'Inactive: 1569152 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122608 kB' 'Mapped: 51436 kB' 'Shmem: 10464 kB' 'KReclaimable: 68208 kB' 'Slab: 142208 kB' 'SReclaimable: 68208 kB' 'SUnreclaim: 74000 kB' 'KernelStack: 6704 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.566 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical read/compare/continue xtrace records for MemFree through HardwareCorrupted elided: each field is tested against AnonHugePages and skipped ...]
00:04:45.567 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.567 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.567 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.567 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.567 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
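verify_nr_hugepages, traced above and continuing below, opens with two probes: it samples anonymous THP usage only when transparent hugepages are not pinned to [never], then reads the surplus hugepage count. A hedged, self-contained sketch of just those two probes (awk stands in for the get_meminfo helper; the full verification lives in test/setup/hugepages.sh):

    #!/usr/bin/env bash
    # Hedged sketch of the first probes in verify_nr_hugepages as traced here.
    verify_probes_sketch() {
        local anon=0 surp
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # kB; 0 in this run
        fi
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # pages over the pool
        echo "anon=$anon surp=$surp"
    }

    verify_probes_sketch   # on this host: anon=0 surp=0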
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7750524 kB' 'MemAvailable: 9481820 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496564 kB' 'Inactive: 1569152 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122364 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141828 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73864 kB' 'KernelStack: 6688 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.568 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue xtrace records for MemFree through Percpu elided: each field is tested against HugePages_Surp and skipped ...]
00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7750524 kB' 'MemAvailable: 9481820 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496556 kB' 'Inactive: 1569152 kB' 'Active(anon): 131456 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122652 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141828 kB' 'SReclaimable: 67964 kB' 
'SUnreclaim: 73864 kB' 'KernelStack: 6688 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.569 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.570 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.831 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.832 00:23:50 
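Before the scan lands on its key just below, it is worth spelling out the idiom this trace keeps repeating: get_meminfo() in setup/common.sh snapshots /proc/meminfo with mapfile, then splits each line on IFS=': ' with read -r var val _ (the trailing _ swallows the "kB" unit) and continues until the key equals the requested field, echoing the value on a match. A minimal sketch of that scan follows; the helper name get_meminfo_value is hypothetical (not part of the SPDK scripts), and it streams the file directly instead of snapshotting it, with an illustrative fallback of 0 when the key is absent.

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo key scan traced above (hypothetical helper).
    get_meminfo_value() {
        local get=$1 var val _
        # IFS=': ' splits "MemTotal:    12241968 kB" into var=MemTotal,
        # val=12241968, with the trailing _ absorbing the "kB" unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0   # illustrative default when the key is missing
    }

    get_meminfo_value HugePages_Surp   # -> 0 on this host, per the trace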
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:45.832 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.832 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.833 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7750524 kB' 'MemAvailable: 9481820 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496520 kB' 'Inactive: 1569152 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141836 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73872 kB' 'KernelStack: 6672 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[... the IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue scan repeats for every key in the snapshot above, from MemTotal through Unaccepted, none matching ...]
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
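At this point the script has all three numbers it needs: the requested nr_hugepages=1024, surp=0, and resv=0, and hugepages.sh@110 confirms the kernel's HugePages_Total matches (1024 == 1024 + 0 + 0, and 1024 pages at Hugepagesize 2048 kB is exactly the Hugetlb: 2097152 kB the snapshot reports). get_nodes then records the per-node expectation of 1024 pages on the single node. A sketch of that arithmetic check, reusing the hypothetical get_meminfo_value helper from the sketch above:

    # Recap of the hugepages.sh@107/@110 consistency check, with the
    # values this log reports (nr_hugepages=1024, surp=0, resv=0).
    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_value HugePages_Total)   # 1024 in the trace above
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: kernel reports $total pages" >&2
    fi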
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.835 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7750524 kB' 'MemUsed: 4491444 kB' 'SwapCached: 0 kB' 'Active: 496744 kB' 'Inactive: 1569152 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 184 kB' 'Writeback: 0 kB' 'FilePages: 1944724 kB' 'Mapped: 51320 kB' 'AnonPages: 122756 kB' 'Shmem: 10464 kB' 'KernelStack: 6688 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 141832 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue scan repeats over the node0 snapshot above (MemTotal onward) ...]
00:04:45.836 00:23:50 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.836 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
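
The scan just traced is the generic meminfo lookup used throughout these tests. A minimal sketch of that lookup, paraphrased from the setup/common.sh statements visible in this trace (reconstructed, not copied from the source tree): when a node is given and a per-node meminfo file exists, that file is used and its "Node N " line prefix is stripped; otherwise /proc/meminfo is read directly.

  get_meminfo() {
      local get=$1 node=$2 var val _ mem_f mem line
      mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # first field is the key, second the value; e.g. HugePages_Surp -> 0 above
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

Called as "get_meminfo HugePages_Surp 0" it answers from node0's meminfo, which is why the trace above ends with echo 0.
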
00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.837 node0=1024 expecting 1024 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.837 00:04:45.837 real 0m1.031s 00:04:45.837 user 0m0.467s 00:04:45.837 sys 0m0.477s 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.837 00:23:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:45.837 ************************************ 00:04:45.837 END TEST default_setup 00:04:45.837 ************************************ 00:04:45.837 00:23:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:45.837 00:23:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:45.837 00:23:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.837 00:23:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.837 00:23:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.837 ************************************ 00:04:45.837 START TEST per_node_1G_alloc 00:04:45.837 ************************************ 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
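
The get_test_nr_hugepages trace here (its per-node loop continues just below) is unit arithmetic: the test requests 1 GiB of hugepages on node 0, and with the 2048 kB Hugepagesize reported in this log's meminfo dumps that becomes 512 pages. Restated with values taken from this log (variable names are illustrative, not from setup/hugepages.sh):

  size_kb=1048576                      # requested total, as in 'get_test_nr_hugepages 1048576 0'
  hugepage_kb=2048                     # Hugepagesize from the meminfo dumps in this log
  echo $(( size_kb / hugepage_kb ))    # 512, matching nr_hugepages=512 and nodes_test[0]=512
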
00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.837 00:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.095 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.095 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
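
NRHUGE=512 HUGENODE=0 drive the scripts/setup.sh invocation above to place the whole reservation on a single NUMA node. The kernel's standard per-node interface for that is the sysfs nr_hugepages file; this log does not show setup.sh's internals, so treat the following as an illustration of that interface rather than the script's actual code:

  # reserve 512 x 2 MiB = 1 GiB of hugepages on NUMA node 0 (run as root)
  hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  echo 512 > "$hp/nr_hugepages"
  cat "$hp/nr_hugepages" "$hp/free_hugepages"   # expect 512 and 512, as the dumps below report
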
00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8799072 kB' 'MemAvailable: 10530380 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496808 kB' 'Inactive: 1569164 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 51724 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141848 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6676 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.364 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... identical compare/continue/IFS/read cycles for the remaining keys MemFree through HardwareCorrupted omitted; none match ...] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8799208 kB' 'MemAvailable: 10530516 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496360 kB' 'Inactive: 1569164 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122444 kB' 'Mapped: 51320 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141848 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6688 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
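
The dump above already shows the expected state (HugePages_Total: 512, HugePages_Free: 512), and the scans that follow collect the correction terms. In outline, verify_nr_hugepages does the following (paraphrased from the hugepages.sh line numbers in this trace, not verbatim source):

  anon=$(get_meminfo AnonHugePages)    # 0 in this run: transparent hugepages must not inflate the count
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run, scanned next
  resv=$(get_meminfo HugePages_Rsvd)   # queried right after surp in this trace
  # each node's expected count (nodes_test) is then adjusted by resv and that
  # node's surplus pages and compared against what the system reports per node,
  # as in 'node0=1024 expecting 1024' at the end of default_setup above
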
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.366 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.366 00:23:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.367 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.367 00:23:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... trace repeats the setup/common.sh@31-@32 IFS=': ' / read -r var val _ / compare / continue cycle for SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd -- none match HugePages_Surp ...]
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.368 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8799208 kB' 'MemAvailable: 10530516 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496376 kB' 'Inactive: 1569164 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122716 kB' 'Mapped: 51320 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141848 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6688 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
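The scan traced above is the same pattern for every get_meminfo call in this test: feed the meminfo snapshot through an IFS=': ' read loop and print the value of the requested field, or 0 when the field is absent. A minimal standalone sketch of that loop, assuming plain /proc/meminfo input (get_meminfo_sketch is a hypothetical name; the real helper lives in setup/common.sh):

    #!/usr/bin/env bash
    # Sketch of the traced field scan: split each meminfo line on ': ',
    # echo the value of the requested field, fall back to 0 if missing.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on this box, per the trace

The linear scan is why the trace shows one compare/continue pair per field: every field before the match is visited once per call.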
[... trace repeats the setup/common.sh@31-@32 IFS=': ' / read -r var val _ / compare / continue cycle over the snapshot above, MemTotal through HugePages_Free in snapshot order -- none match HugePages_Rsvd ...]
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.370 nr_hugepages=512
00:04:46.370 resv_hugepages=0
00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.370 surplus_hugepages=0
00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.370 anon_hugepages=0
00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.370 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8799208 kB' 'MemAvailable: 10530516 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496316 kB' 'Inactive: 1569164 kB' 'Active(anon): 131216 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 51320 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141848 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6672 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
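The hugepages.sh@107/@109 checks above are the actual assertion of this test step: the kernel's HugePages_Total must equal the requested count once surplus and reserved pages are accounted for. Spelled out with this run's values (variable names mirror the trace; the standalone form is illustrative, not the script's code):

    # Values echoed by the trace above.
    nr_hugepages=512   # requested 2 MiB hugepages
    surp=0             # HugePages_Surp from get_meminfo
    resv=0             # HugePages_Rsvd from get_meminfo
    total=512          # HugePages_Total from get_meminfo

    # The allocation is exact only if 512 == 512 + 0 + 0.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"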
[... trace repeats the setup/common.sh@31-@32 IFS=': ' / read -r var val _ / compare / continue cycle over the snapshot above, MemTotal through Unaccepted in snapshot order -- none match HugePages_Total ...]
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.371 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8799208 kB' 'MemUsed: 3442760 kB' 'SwapCached: 0 kB' 'Active: 496600 kB' 'Inactive: 1569164 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1944728 kB' 'Mapped: 51320 kB' 'AnonPages: 122700 kB' 'Shmem: 10464 kB' 'KernelStack: 6688 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 141848 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
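Before the per-node check above, get_nodes (hugepages.sh@27-@33 in the trace) discovers NUMA nodes by globbing sysfs. A minimal sketch of that discovery, assuming extglob is enabled as the trace's node+([0-9]) pattern requires; the array name mirrors the trace:

    #!/usr/bin/env bash
    shopt -s extglob                 # needed for the +([0-9]) glob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # strip everything through the last "node" to get the numeric id
        nodes_sys[${node##*node}]=512
    done
    echo "no_nodes=${#nodes_sys[@]}"   # this VM has one node: no_nodes=1

With a single node, all 512 expected hugepages land on node 0, which is what the node-0 get_meminfo calls then verify.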
[... trace repeats the setup/common.sh@31-@32 IFS=': ' / read -r var val _ / compare / continue cycle over the node0 snapshot above -- MemTotal through WritebackTmp so far, none matching HugePages_Surp; the log continues past the end of this excerpt ...]
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.372 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.373 node0=512 expecting 512 00:04:46.373 ************************************ 00:04:46.373 END TEST per_node_1G_alloc 00:04:46.373 ************************************ 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.373 00:04:46.373 real 0m0.568s 00:04:46.373 user 0m0.288s 00:04:46.373 sys 0m0.282s 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.373 00:23:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.373 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.373 00:23:51 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:46.373 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.373 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.373 00:23:51 
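The per_node_1G_alloc trace above answers each query such as "HugePages_Surp on node 0" by rescanning that node's meminfo file one field at a time, which is why every lookup produces a long run of continue iterations. A minimal standalone sketch of that lookup, reconstructed from the traced setup/common.sh commands (the real helper may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above; reconstructed from the
    # xtrace output, so the real setup/common.sh helper may differ in detail.
    shopt -s extglob                       # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read the node's own meminfo; its lines carry a
        # "Node <n> " prefix that is stripped after loading.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # quoted RHS matches literally
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 for node 0 in the run above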
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.373 ************************************ 00:04:46.373 START TEST even_2G_alloc 00:04:46.373 ************************************ 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.373 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.945 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.945 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc 
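even_2G_alloc just requested 2097152 kB of hugepages: divided by the 2048 kB Hugepagesize reported in the meminfo dumps, that gives the nr_hugepages=1024 seen in the trace, all placed on the single node before scripts/setup.sh re-runs. NRHUGE and HUGE_EVEN_ALLOC are set immediately beforehand, so presumably setup.sh reads them from the environment. A sketch of that arithmetic (the kB units are inferred from the traced values, not taken from the script source):

    # Sketch of the get_test_nr_hugepages sizing traced above.
    shopt -s extglob
    size=2097152                     # 2G request, in kB
    default_hugepages=2048           # Hugepagesize from the meminfo dumps, kB
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"             # 1024, matching nr_hugepages=1024 above

    # With one NUMA node the whole pool lands on node 0; the array index
    # comes from stripping the "node" prefix of the sysfs directory name.
    declare -a nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_test[${node##*node}]=$nr_hugepages
    done

    export NRHUGE=$nr_hugepages      # picked up by scripts/setup.sh
    export HUGE_EVEN_ALLOC=yes       # spread the pool evenly across nodes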
-- setup/hugepages.sh@92 -- # local surp 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.945 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7758024 kB' 'MemAvailable: 9489332 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496896 kB' 'Inactive: 1569164 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123148 kB' 'Mapped: 51416 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141900 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73936 kB' 'KernelStack: 6660 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.946 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.947 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7758024 kB' 'MemAvailable: 9489332 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496396 kB' 'Inactive: 1569164 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 
1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141888 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6672 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
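verify_nr_hugepages makes one full pass over /proc/meminfo per accounting field: AnonHugePages for anon, HugePages_Surp for surplus, then HugePages_Rsvd for reserved pages, each pass producing another run of continue lines. All three come back 0 in this run, so the check reduces to HugePages_Total matching the 1024 pages requested, mirroring the (( 512 == nr_hugepages + surp + resv )) test traced for per_node_1G_alloc earlier. A compact sketch of that bookkeeping, reusing the get_meminfo sketch above:

    # Sketch of the verification bookkeeping in this test, reusing the
    # get_meminfo sketch shown earlier; values are the ones in this run.
    anon=$(get_meminfo AnonHugePages)     # 0 kB: no THP interference
    surp=$(get_meminfo HugePages_Surp)    # 0: no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)    # 0: no reserved pages
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    nr_hugepages=1024

    (( total == nr_hugepages + surp + resv )) && echo "hugepages verified"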
00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.948 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.949 00:23:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: the IFS=': '/read loop emits an identical [[ key == pattern ]] / continue quartet for each remaining meminfo key — ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd — before the match below]
00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.949 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.950 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7758024 kB' 'MemAvailable: 9489332 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496440 kB' 'Inactive: 1569164 kB' 'Active(anon): 131340 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141892 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73928 kB' 'KernelStack: 6688 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
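For readers decoding this trace: the records above are bash xtrace from an SPDK helper that linearly scans a meminfo snapshot for a single key. A minimal sketch of that shape, reconstructed from the trace (the name get_meminfo_sketch and the exact layout are assumptions, not the verbatim setup/common.sh source):

  # Look up one key in /proc/meminfo, or in a per-node meminfo file when a
  # node id is given. The sed strip mirrors the
  # mem=("${mem[@]#Node +([0-9]) }") record in the trace, which removes the
  # "Node <n> " prefix that per-node meminfo lines carry.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # With no node id this test probes .../node/node/meminfo, which never
      # exists -- exactly the [[ -e ... ]] record seen at common.sh@23.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0
  }
  # Usage: get_meminfo_sketch HugePages_Surp     -> 0 on this host
  #        get_meminfo_sketch HugePages_Total 0  -> 1024 (node0)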
[xtrace trimmed: the read/match loop now scans the snapshot for HugePages_Rsvd, stepping past every key from MemTotal through HugePages_Free with the same per-key IFS/read/[[ ... ]]/continue quartet]
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
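The backslash-riddled right-hand sides (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and friends) are not log corruption: when the right operand of == inside [[ ]] comes from an unquoted expansion, bash's xtrace prints the expanded value with each character backslash-escaped. A quick way to reproduce the effect in any bash shell (illustrative session, not from this job):

  $ set -x
  $ key=HugePages_Rsvd
  $ [[ MemTotal == $key ]]
  + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]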
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.952 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759000 kB' 'MemAvailable: 9490308 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496436 kB' 'Inactive: 1569164 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122716 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141892 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73928 kB' 'KernelStack: 6688 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
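The hugepages.sh@107/@109/@110 arithmetic records are the real assertions of this even_2G_alloc step: the count the test configured must match what the kernel reports, net of surplus and reserved pages. In sketch form, reusing the hypothetical get_meminfo_sketch helper from above (the variable names are assumptions modeled on the trace):

  nr_hugepages=1024                             # what the test configured
  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
  (( total == nr_hugepages + surp + resv )) \
      || echo "hugepage accounting mismatch" >&2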
[xtrace trimmed: the lookup loop scans for HugePages_Total, stepping past every key from MemTotal through Unaccepted with the same per-key quartet]
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.953 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
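The get_nodes records show how the test discovers NUMA topology: an extglob pathname match over the /sys/devices/system/node/node<N> directories, with one array slot per node id. A minimal reconstruction under the same assumptions as the earlier sketch (this VM exposes a single node 0 holding all 1024 pages):

  shopt -s extglob    # needed so node+([0-9]) parses as an extended glob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything through the last "node",
      # leaving the numeric id, e.g. .../node0 -> 0
      nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
  done
  echo "nodes found: ${!nodes_sys[*]} (no_nodes=${#nodes_sys[@]})"
  # on this host: nodes found: 0 (no_nodes=1), with nodes_sys[0]=1024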
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.954 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759512 kB' 'MemUsed: 4482456 kB' 'SwapCached: 0 kB' 'Active: 496356 kB' 'Inactive: 1569164 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1944728 kB' 'Mapped: 51316 kB' 'AnonPages: 122628 kB' 'Shmem: 10464 kB' 'KernelStack: 6672 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 141888 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace trimmed: the lookup loop now scans node0's meminfo for HugePages_Surp, stepping past MemTotal through AnonHugePages with the same per-key quartet]
00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 --
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.955 node0=1024 expecting 1024 00:04:46.955 ************************************ 00:04:46.955 END TEST even_2G_alloc 00:04:46.955 
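The "node0=1024 expecting 1024" result above, and the 1025-page request in the odd_alloc test that starts below, both boil down to kB arithmetic on the HUGEMEM setting. A minimal sketch of that arithmetic, assuming ceiling division (the exact expression inside get_test_nr_hugepages is not visible in this trace):

    # even_2G_alloc's node0=1024 result corresponds to 1024 pages of 2048 kB
    # (2048 MB); odd_alloc (below) sets HUGEMEM=2049 to force an odd count.
    hugemem_mb=2049
    size_kb=$((hugemem_mb * 1024))             # 2098176, as in the trace below
    page_kb=2048                               # 'Hugepagesize: 2048 kB'
    nr=$(((size_kb + page_kb - 1) / page_kb))  # assumed ceiling division
    echo "$nr"                                 # 1025 -- an odd page count
    echo $((nr * page_kb))                     # 2099200 -- matches 'Hugetlb: 2099200 kB'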
************************************ 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.955 00:04:46.955 real 0m0.569s 00:04:46.955 user 0m0.265s 00:04:46.955 sys 0m0.299s 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.955 00:23:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.214 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.214 00:23:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:47.214 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.214 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.214 00:23:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.214 ************************************ 00:04:47.214 START TEST odd_alloc 00:04:47.214 ************************************ 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.214 00:23:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.475 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.475 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.475 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759432 kB' 'MemAvailable: 9490736 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 497164 kB' 'Inactive: 1569160 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 51608 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141968 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6724 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace trimmed: setup/common.sh@31-32 read/compare loop skipped every field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace trimmed: setup/common.sh@17-@31 local-variable and mapfile setup, same sequence as the call above]
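The loops trimmed above are setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time with IFS=': ' and read. A hedged reconstruction from this xtrace (the @17-@33 markers mirror the traced line numbers; the loop form and per-node handling are assumptions, not the verbatim SPDK source):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}  # @17/@18: field to look up, optional NUMA node
        local var val             # @19
        local mem_f mem           # @20
        mem_f=/proc/meminfo       # @22
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then  # @23
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f" # @28
        # @29: per-node meminfo prefixes each line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # @31
            [[ $var == "$get" ]] || continue        # @32: skip non-matching fields
            echo "$val"                             # @33
            return 0
        done
    }
    # e.g. get_meminfo HugePages_Surp -> "0", exactly what the trace echoes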
00:04:47.477 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759180 kB' 'MemAvailable: 9490484 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496856 kB' 'Inactive: 1569160 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 51520 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141968 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6752 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace trimmed: setup/common.sh@31-32 read/compare loop skipped every field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace trimmed: setup/common.sh@17-@31 local-variable and mapfile setup]
00:04:47.479 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759180 kB' 'MemAvailable: 9490484 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496784 kB' 'Inactive: 1569160 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 51520 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141964 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74000 kB' 'KernelStack: 6700 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
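At this point verify_nr_hugepages has anon=0 and surp=0 in hand and is reading HugePages_Rsvd (the scan continues below). A minimal sketch of the bookkeeping those values feed, with variable names taken from the trace and the surrounding control flow assumed (nodes_test and nr_hugepages are the globals set up at the start of the test):

    verify_nr_hugepages() {
        local anon surp resv node
        anon=$(get_meminfo AnonHugePages)   # hugepages.sh@97: 0 in this run
        surp=$(get_meminfo HugePages_Surp)  # hugepages.sh@99: 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)  # hugepages.sh@100: read below
        # Surplus or reserved pages would skew the per-node totals, so a
        # clean pool is assumed to be required before comparing counts.
        ((surp == 0 && resv == 0)) || return 1
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]} expecting $nr_hugepages"  # @128
            [[ ${nodes_test[node]} == "$nr_hugepages" ]] || return 1      # @130
        done
    }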
[xtrace trimmed: setup/common.sh@31-32 read/compare loop skipped every field from MemTotal through SUnreclaim while scanning for HugePages_Rsvd]
00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741
00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.742 nr_hugepages=1025 00:04:47.742 resv_hugepages=0 00:04:47.742 surplus_hugepages=0 00:04:47.742 anon_hugepages=0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- 
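The setup/common.sh@17-@33 records above (and the condensed runs elsewhere in this trace) all come from one small field scanner. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source — the function name here is hypothetical, and the @28-@29 mapfile step that strips the leading "Node <n> " prefix from per-node files is omitted:

    #!/usr/bin/env bash
    # Sketch of the meminfo scanner traced at setup/common.sh@17-@33:
    # read "Key: value [kB]" lines, echo the value for the requested key,
    # and skip every other field with `continue` (the repeated @32 records).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, @23-@24 switch to the per-node meminfo file;
        # with none, /sys/devices/system/node/node/meminfo fails -e and
        # /proc/meminfo stays in place, exactly as the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching field: keep scanning
            echo "$val"                        # @33: emit the value for the caller
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the host traced above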
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.742 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759180 kB' 'MemAvailable: 9490484 kB' 'Buffers: 2436 kB' 'Cached: 1942288 kB' 'SwapCached: 0 kB' 'Active: 496764 kB' 'Inactive: 1569160 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 51520 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 141960 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73996 kB' 'KernelStack: 6684 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the setup/common.sh@31-@32 read loop walks every /proc/meminfo field, hitting `continue` on each key that is not HugePages_Total]
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.744 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7759180 kB' 'MemUsed: 4482788 kB' 'SwapCached: 0 kB' 'Active: 496792 kB' 'Inactive: 1569160 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1944724 kB' 'Mapped: 51520 kB' 'AnonPages: 122928 kB' 'Shmem: 10464 kB' 'KernelStack: 6668 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 141960 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 73996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
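The setup/hugepages.sh@110-@117 records show the caller consuming those echoed values: the global total is checked against nr_hugepages + surplus + reserved, then each node's surplus is folded into that node's expected count. A sketch of that caller-side pattern under the same assumptions — nodes_test seeded with the 1025 pages seen above, and get_meminfo_sketch from the earlier sketch already defined; the wiring is reconstructed from the trace, not copied from SPDK:

    #!/usr/bin/env bash
    # Caller-side pattern traced at setup/hugepages.sh@110-@117.
    nr_hugepages=1025 surp=0 resv=0
    nodes_test=([0]=1025)

    total=$(get_meminfo_sketch HugePages_Total)           # "1025" echoed above
    (( total == nr_hugepages + surp + resv )) || exit 1   # hugepages.sh@110

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                          # @116
        (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))  # @117
    done
    echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"   # cf. the test output below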
[xtrace condensed: the setup/common.sh@31-@32 read loop walks every node0 meminfo field from MemTotal through HugePages_Free, hitting `continue` on each key that is not HugePages_Surp]
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.746 node0=1025 expecting 1025
00:04:47.746 ************************************
00:04:47.746 END TEST odd_alloc
00:04:47.746 ************************************
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:47.746
00:04:47.746 real 0m0.628s
00:04:47.746 user 0m0.305s
00:04:47.746 sys 0m0.301s
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.746 00:23:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:47.746 00:23:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:47.746 00:23:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:47.746 00:23:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:47.746 00:23:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.746 00:23:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:47.746 ************************************
00:04:47.746 START TEST custom_alloc
00:04:47.746 ************************************
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
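The get_test_nr_hugepages records just above (setup/hugepages.sh@49-@57) turn a size request into a page count: 1048576 kB at the 2048 kB default hugepage size seen in the snapshots gives the nr_hugepages=512 in the trace. A sketch of that computation, with the division assumed from those numbers rather than read out of the script:

    #!/usr/bin/env bash
    # Size-to-page-count step traced at setup/hugepages.sh@49-@57:
    # 1048576 kB / 2048 kB per hugepage = 512 hugepages.
    size_kb=1048576
    default_hugepages=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
    (( size_kb >= default_hugepages )) || { echo "request below one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # 512, matching hugepages.sh@57 above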
setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.746 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.044 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.044 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
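The trace above is the whole allocation plan for this test: get_test_nr_hugepages turns the requested 1048576 kB into 512 default-sized (2048 kB) pages, and with no user-supplied node list the per-node pass pins all 512 pages on node 0, yielding HUGENODE='nodes_hp[0]=512'. A minimal bash sketch of that arithmetic, reconstructed from the trace for readability (this is not the shipped setup/hugepages.sh; variable names are borrowed from the trace):

#!/usr/bin/env bash
# Reconstruction of the traced math: size / default hugepage size gives the
# page count, then a single-node fallback assigns every page to node 0.
default_hugepages=2048              # kB, Hugepagesize from /proc/meminfo
size=1048576                        # kB, requested by get_test_nr_hugepages
(( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
user_nodes=()                       # empty: no per-node override was given
declare -a nodes_hp nodes_test
_no_nodes=1                         # single NUMA node on this VM
nodes_test[_no_nodes - 1]=$nr_hugepages
nodes_hp[0]=${nodes_test[0]}
HUGENODE=("nodes_hp[0]=${nodes_hp[0]}")
echo "nr_hugepages=$nr_hugepages HUGENODE=${HUGENODE[0]}"
# -> nr_hugepages=512 HUGENODE=nodes_hp[0]=512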
00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.747 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:48.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:48.044 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.044 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.044 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8811852 kB' 'MemAvailable: 10543160 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496556 kB' 'Inactive: 1569164 kB' 'Active(anon): 131456 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122788 kB' 'Mapped: 51360 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142024 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74060 kB' 'KernelStack: 6720 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
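Worth noting in the get_meminfo preamble above: `local node=` leaves $node empty, so the test against /sys/devices/system/node/node/meminfo fails and the reader falls back to the system-wide /proc/meminfo. A sketch of that file selection under the same empty-$node assumption (the traced script tests the sysfs path and $node on separate lines; they are combined here for brevity):

#!/usr/bin/env bash
shopt -s extglob                     # needed for the +([0-9]) strip below
node=${1-}                           # empty in the trace above
mem_f=/proc/meminfo
# Prefer the per-node meminfo when a node id is given and the file exists.
if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"            # slurp the file once, as the trace does
mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix on sysfs lines
echo "read ${#mem[@]} lines from $mem_f"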
[xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field against \A\n\o\n\H\u\g\e\P\a\g\e\s and issues continue for every key from MemTotal through HardwareCorrupted (timestamps 00:04:48.044-00:04:48.045)]
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.045 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.046 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.046 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.046 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8812240 kB' 'MemAvailable: 10543548 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496504 kB' 'Inactive: 1569164 kB' 'Active(anon): 131404 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122736 kB' 'Mapped: 51360 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142020 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74056 kB' 'KernelStack: 6688 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
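The long runs of `[[ <key> == ... ]]` / `continue` condensed above (and again below for HugePages_Surp) are simply a field-by-field scan of the snapshot: split each meminfo line on ': ', skip until the requested key matches, echo its value. An equivalent minimal lookup, simplified from the traced loop (the real get_meminfo iterates the mapfile'd array rather than re-reading the file):

#!/usr/bin/env bash
# Minimal equivalent of the traced get_meminfo scan: walk /proc/meminfo,
# skip every non-matching key with continue, echo the first match's value.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the repeated 'continue' above
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0                                  # key not present
}
get_meminfo AnonHugePages    # -> 0 on this box, matching anon=0 above
get_meminfo HugePages_Surp   # -> 0, matching surp=0 below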
00:04:48.046 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and issues continue for every key from MemTotal through HugePages_Rsvd (timestamps 00:04:48.046-00:04:48.306)]
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.306 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.307 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.307 00:23:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8812240 kB' 'MemAvailable: 10543548 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496420 kB' 'Inactive: 1569164 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142004 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74040 kB' 'KernelStack: 6672 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
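With anon=0 and surp=0 echoed above and the HugePages_Rsvd probe starting from the snapshot just shown, verify_nr_hugepages has what it needs to compare the configured count against what the kernel reports; every snapshot consistently shows HugePages_Total: 512 and HugePages_Free: 512. A hypothetical shape of that final tally (the names hp_total and got, and the exact expression, are illustrative assumptions; the authoritative check is the setup/hugepages.sh@126-130 loop, which for odd_alloc printed 'node0=1025 expecting 1025'):

#!/usr/bin/env bash
# Hypothetical reconstruction of the final tally: subtract surplus pages
# from the per-node total and require it to equal the requested count.
nr_hugepages=512              # requested by this test
hp_total=512                  # HugePages_Total from the snapshot above
surp=0                        # echoed by the HugePages_Surp probe
got=$(( hp_total - surp ))
echo "node0=$got expecting $nr_hugepages"
[[ $got == "$nr_hugepages" ]] && echo OK   # mirrors the @130-style [[ ... ]] test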
[xtrace condensed: setup/common.sh@32 compares each /proc/meminfo field against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and issues continue key by key from MemTotal onward (timestamps 00:04:48.307-00:04:48.308; the wall clock ticks from 00:23:52 to 00:23:53 during the scan); the captured trace breaks off mid-scan after the ShmemPmdMapped comparison]
00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.308 nr_hugepages=512 00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:48.308 00:23:53 
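What the wall of xtrace above is doing: setup/common.sh's meminfo reader loads /proc/meminfo (or a NUMA node's meminfo), strips any "Node <id>" prefix, then walks key/value pairs, continuing past each non-matching key until the requested one is found and its value echoed. A minimal standalone sketch of that pattern, hedged and reconstructed from the trace rather than copied from SPDK's source:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen at common.sh@29

# Hedged reconstruction of the get_meminfo pattern traced above: fetch one
# key from /proc/meminfo, or from a NUMA node's meminfo when a node id is
# given. Not the verbatim SPDK setup/common.sh implementation.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <id> "; drop it so the
    # same "Key: value" parse handles both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the long compare/continue run above
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd     # prints 0 in the run above
get_meminfo HugePages_Surp 0   # node-0 value, queried a few entries later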
00:04:48.308 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:48.308 resv_hugepages=0
00:04:48.308 surplus_hugepages=0
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:48.308 anon_hugepages=0
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8812240 kB' 'MemAvailable: 10543548 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496424 kB' 'Inactive: 1569164 kB' 'Active(anon): 131324 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 51316 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142004 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74040 kB' 'KernelStack: 6672 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: common.sh@31-32 scans each snapshot key against HugePages_Total, continuing past MemTotal through Unaccepted]
00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.310 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8812504 kB' 'MemUsed: 3429464 kB' 'SwapCached: 0 kB' 'Active: 496596 kB' 'Inactive: 1569164 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1944728 kB' 'Mapped: 51316 kB' 'AnonPages: 122484 kB' 'Shmem: 10464 kB' 'KernelStack: 6688 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 141992 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
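The node-0 snapshot just read comes from /sys/devices/system/node/node0/meminfo. The kernel also exposes the same hugepage counters as single-value sysfs files, which sidesteps meminfo parsing entirely; a hedged sketch of that alternative:

# Per-node hugepage counters as plain sysfs files (standard Linux paths,
# not part of the SPDK script): one value per file, no parsing loop needed.
node=0
sz=2048kB    # matches 'Hugepagesize: 2048 kB' in the snapshots above
base=/sys/devices/system/node/node$node/hugepages/hugepages-$sz
echo "node$node total:   $(<"$base"/nr_hugepages)"        # 512 in this run
echo "node$node free:    $(<"$base"/free_hugepages)"      # 512
echo "node$node surplus: $(<"$base"/surplus_hugepages)"   # 0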
[xtrace condensed: common.sh@31-32 scans the node0 snapshot keys -- MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, the anon/file splits, and the remaining counters through HugePages_Total and HugePages_Free -- against HugePages_Surp]
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:48.311 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:48.311 node0=512 expecting 512
00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:48.312 00:23:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:48.312
00:04:48.312 real	0m0.524s
00:04:48.312 user	0m0.274s
00:04:48.312 sys	0m0.281s
00:23:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:48.312 ************************************
00:04:48.312 END TEST custom_alloc
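Stated as a standalone check, the invariant custom_alloc just verified (hugepages.sh@107/@110/@128 above) is that the kernel-reported pool matches the request: HugePages_Total == requested + HugePages_Surp + HugePages_Rsvd, with the per-node split adding back up to the expected count. A sketch using the get_meminfo helper outlined earlier (an assumption, not the test's exact code):

# Hedged consistency check mirroring the accounting above.
nr=512   # what custom_alloc asked for on node 0
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
if (( total == nr + surp + resv )); then
    echo "node0=$nr expecting $nr"   # the line the test just printed
else
    echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
fi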
00:04:48.312 ************************************
00:23:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:23:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:23:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:23:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:48.312 ************************************
00:04:48.312 START TEST no_shrink_alloc
00:04:48.312 ************************************
00:23:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:48.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:48.569 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.569 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
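The get_test_nr_hugepages 2097152 0 call above (hugepages.sh@49-57) turns a kilobyte size into a page count using the system default hugepage size: 2097152 kB divided by 2048 kB per page gives the nr_hugepages=1024 seen in the trace. A hedged standalone version of that arithmetic:

# Size-to-page-count conversion, mirroring the sizing step above.
size_kb=2097152
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
echo $(( size_kb / default_kb ))   # -> 1024, matching HugePages_Total in the next snapshot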
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.829 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.830 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761476 kB' 'MemAvailable: 9492784 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 497164 kB' 'Inactive: 1569164 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123172 kB' 'Mapped: 51408 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142024 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74060 kB' 'KernelStack: 6644 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:48.830 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.830 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:48.830 [xtrace scan elided: each /proc/meminfo field from MemFree through HardwareCorrupted is compared against AnonHugePages at setup/common.sh@32 and falls through via continue] 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761476 kB' 'MemAvailable: 9492784 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 497216 kB' 'Inactive: 1569164 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123224 kB' 'Mapped: 51408 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142032 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74068 kB' 'KernelStack: 6660 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.831 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [xtrace scan elided: each /proc/meminfo field from Cached through HugePages_Rsvd is compared against HugePages_Surp at setup/common.sh@32 and falls through via continue] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761476 kB' 'MemAvailable: 9492784 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496812 kB' 'Inactive: 1569164 kB' 'Active(anon): 131712 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 51320 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142032 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74068 kB' 'KernelStack: 6688 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.833 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.834 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:48.835 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same four xtrace lines (IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue) repeat for every remaining /proc/meminfo key from Bounce through HugePages_Free ...]
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:48.836 nr_hugepages=1024
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:48.836 resv_hugepages=0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:48.836 surplus_hugepages=0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:48.836 anon_hugepages=0
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
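For anyone following the trace: the long runs of "[[ <key> == ... ]] / continue" lines are set -x output from get_meminfo in setup/common.sh, which walks meminfo one "key: value" pair at a time until the requested key matches, then echoes the value. A minimal standalone sketch of that pattern, plus the hugepages.sh accounting check it feeds, is below (illustrative only, simplified from the trace, not the exact SPDK helpers):

    #!/usr/bin/env bash
    # Sketch: scan meminfo-style "key: value" lines for one key, print its value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # skip non-matching keys, as traced above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # Accounting check mirroring hugepages.sh@107: the configured page count must
    # equal what the kernel reports plus surplus and reserved pages.
    nr=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( 1024 == nr + surp + resv )) && echo "all 1024 hugepages accounted for"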
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.836 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761728 kB' 'MemAvailable: 9493036 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 496668 kB' 'Inactive: 1569164 kB' 'Active(anon): 131568 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122496 kB' 'Mapped: 51320 kB' 'Shmem: 10464 kB' 'KReclaimable: 67964 kB' 'Slab: 142008 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74044 kB' 'KernelStack: 6672 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[... the IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue xtrace pattern repeats for every snapshot key from MemTotal through Unaccepted ...]
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.838 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761728 kB' 'MemUsed: 4480240 kB' 'SwapCached: 0 kB' 'Active: 496412 kB' 'Inactive: 1569164 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1944728 kB' 'Mapped: 51320 kB' 'AnonPages: 122504 kB' 'Shmem: 10464 kB' 'KernelStack: 6688 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67964 kB' 'Slab: 142000 kB' 'SReclaimable: 67964 kB' 'SUnreclaim: 74036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
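This is the per-node variant of the same lookup: because get_meminfo was called with node 0, the trace shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips the "Node 0 " prefix that sysfs puts on every line of that file. A hedged sketch of that branch (assumes extglob and bash 4+ for mapfile; not the literal SPDK function):

    #!/usr/bin/env bash
    shopt -s extglob # required for the +([0-9]) pattern used below

    # Sketch of get_meminfo's per-node branch: pick the right meminfo file,
    # strip the "Node <n> " prefix sysfs adds, then scan key/value pairs.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f mem line var val _
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Surp 0 # prints 0 for the node snapshot traced above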
[... the same xtrace pattern repeats while get_meminfo skips every node0 key from MemTotal through HugePages_Free that is not HugePages_Surp ...]
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:48.839 node0=1024 expecting 1024
00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:48.839 00:23:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:49.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:49.097 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:49.097 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:49.097 INFO: Requested 512 hugepages but 1024 already allocated on node0
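That INFO line is the whole point of the no_shrink_alloc case: with CLEAR_HUGE=no and NRHUGE=512, setup.sh finds 1024 hugepages already allocated on node0 and leaves the larger allocation in place rather than shrinking it. Roughly the guard being exercised (an illustration of the behavior implied by the message, using the standard kernel sysfs knob; this is an assumption, not a copy of spdk/scripts/setup.sh):

    #!/usr/bin/env bash
    # Illustrative "grow but never shrink" guard matching the INFO message above.
    NRHUGE=${NRHUGE:-512}
    node=node0
    nr_file=/sys/devices/system/node/$node/hugepages/hugepages-2048kB/nr_hugepages

    current=$(<"$nr_file")
    if ((current >= NRHUGE)); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on $node"
    else
        echo "$NRHUGE" > "$nr_file" # growing the pool is still allowed (needs root)
    fi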
00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
[... hugepages.sh@89-@94 declare the locals node, sorted_t, sorted_s, surp, resv and anon ...]
00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... the usual get_meminfo preamble follows: get=AnonHugePages, node=, mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip ...]
00:04:49.361 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.361 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.361 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7762164 kB' 'MemAvailable: 9493468 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 492996 kB' 'Inactive: 1569164 kB' 'Active(anon): 127896 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50952 kB' 'Shmem: 10464 kB' 'KReclaimable: 67956 kB' 'Slab: 141844 kB' 'SReclaimable: 67956 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6628 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:49.361 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
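The hugepages.sh@96 test above is comparing the contents of the kernel's transparent_hugepage/enabled file, where the bracketed word marks the active mode ("always [madvise] never" here), against the pattern *[never]*: anonymous hugepages are only worth counting when THP is not disabled outright. The same gate as a standalone sketch (the sysfs path is the standard kernel one; the surrounding logic is an assumption, not SPDK's code):

    #!/usr/bin/env bash
    # Sketch of the THP gate: only count AnonHugePages when the active
    # transparent-hugepage mode (the bracketed word) is not "never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled) # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=$anon kB"
    fi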
00:04:49.361 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ <key> == AnonHugePages ]] / continue xtrace pattern repeats for each snapshot key from MemFree through CommitLimit ...]
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
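To make the wall of xtrace above easier to follow: the helper being traced is get_meminfo from setup/common.sh. Below is a minimal reconstruction, inferred from the traced line numbers (@16-@33) rather than copied from the SPDK source, so details such as the exact shape of the node check may differ:

shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from the
# per-NUMA-node meminfo file when NODE is given.
get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo                                                       # @22
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then  # @23/@25
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"                  # @28: one array element per line
	mem=("${mem[@]#Node +([0-9]) }")           # @29: strip 'Node N ' prefixes
	while IFS=': ' read -r var val _; do       # @31
		[[ $var == "$get" ]] || continue   # @32: the long runs of 'continue' above
		echo "$val" && return 0            # @33
	done < <(printf '%s\n' "${mem[@]}")        # @16
	return 1
}

With node unset, the [[ -e /sys/devices/system/node/node/meminfo ]] test in the trace is exactly this check with an empty $node spliced into the path, which is why it probes the nonexistent node/node/meminfo and falls through to /proc/meminfo.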
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.362 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.363 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 12241968 kB' 'MemFree: 7762164 kB' 'MemAvailable: 9493468 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' \
    'Active: 492664 kB' 'Inactive: 1569164 kB' 'Active(anon): 127564 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' \
    'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' \
    'AnonPages: 118672 kB' 'Mapped: 50952 kB' 'Shmem: 10464 kB' 'KReclaimable: 67956 kB' 'Slab: 141836 kB' 'SReclaimable: 67956 kB' 'SUnreclaim: 73880 kB' \
    'KernelStack: 6564 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 13461012 kB' 'Committed_AS: 336124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' \
    'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' \
    'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' \
    'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
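Each quoted 'Key: value' argument of the printf above is what the IFS=': ' read at common.sh@31 splits apart. A standalone illustration of the split (not from the test source):

line='AnonHugePages: 0 kB'
IFS=': ' read -r var val _ <<< "$line"   # IFS=': ' treats both ':' and ' ' as separators
echo "var=$var val=$val"                 # prints: var=AnonHugePages val=0

The unit lands in the throwaway _ field, which is why get_meminfo returns the bare number 0 rather than '0 kB'.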
00:04:49.363 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # ... scan continues: one IFS=': ' read and one 'continue' per key of the snapshot above, in order, until HugePages_Surp comes up ...
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
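The mem=("${mem[@]#Node +([0-9]) }") step traced at common.sh@29 is a no-op for /proc/meminfo; it only matters for the per-node files, whose lines carry a 'Node N ' prefix. A small demo with a hypothetical per-node sample:

shopt -s extglob                   # the +([0-9]) pattern is an extglob
mem=('Node 0 MemTotal: 12241968 kB' 'Node 0 MemFree: 7761912 kB')
mem=("${mem[@]#Node +([0-9]) }")   # strip the 'Node 0 ' prefix from every element
printf '%s\n' "${mem[@]}"          # -> 'MemTotal: 12241968 kB' then 'MemFree: 7761912 kB'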
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.364 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 12241968 kB' 'MemFree: 7761912 kB' 'MemAvailable: 9493216 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' \
    'Active: 491908 kB' 'Inactive: 1569164 kB' 'Active(anon): 126808 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' \
    'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' \
    'AnonPages: 118000 kB' 'Mapped: 50580 kB' 'Shmem: 10464 kB' 'KReclaimable: 67956 kB' 'Slab: 141828 kB' 'SReclaimable: 67956 kB' 'SUnreclaim: 73872 kB' \
    'KernelStack: 6560 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 13461012 kB' 'Committed_AS: 335756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' \
    'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' \
    'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' \
    'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
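The snapshot's hugepage counters are internally consistent, which is what the test is about to assert: 1024 pages of 2048 kB each account for the entire Hugetlb pool. Checking the arithmetic with values copied from the snapshot:

hugepages_total=1024     # HugePages_Total
hugepagesize_kb=2048     # Hugepagesize
echo $((hugepages_total * hugepagesize_kb))   # prints 2097152, matching 'Hugetlb: 2097152 kB'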
00:04:49.365 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # ... scan continues: one IFS=': ' read and one 'continue' per key of the snapshot above, in order, until HugePages_Rsvd comes up ...
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:49.366 nr_hugepages=1024
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:49.366 resv_hugepages=0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.366 surplus_hugepages=0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.366 anon_hugepages=0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
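The hugepages.sh trace just above (@97 through @110) is the bookkeeping of the no_shrink_alloc test: sample the anonymous, surplus, and reserved counters, report them, and assert that all 1024 preallocated pages are plain nr_hugepages. A condensed sketch of that logic, reconstructed from the trace (using the get_meminfo sketch shown earlier) rather than copied from SPDK's setup/hugepages.sh:

nr_hugepages=1024                          # established earlier in the test
anon=$(get_meminfo AnonHugePages)          # @97  -> 0
surp=$(get_meminfo HugePages_Surp)         # @99  -> 0
resv=$(get_meminfo HugePages_Rsvd)         # @100 -> 0
echo "nr_hugepages=$nr_hugepages"          # @102
echo "resv_hugepages=$resv"                # @103
echo "surplus_hugepages=$surp"             # @104
echo "anon_hugepages=$anon"                # @105
(( 1024 == nr_hugepages + surp + resv ))   # @107: nothing borrowed or reserved
(( 1024 == nr_hugepages ))                 # @109
get_meminfo HugePages_Total                # @110: re-read the pool size, scanned next

Under set -e, which these suites typically run with, either failed (( )) assertion would abort the test, so reaching the HugePages_Total read means the preallocated pool survived intact.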
setup/common.sh@28 -- # mapfile -t mem 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761912 kB' 'MemAvailable: 9493216 kB' 'Buffers: 2436 kB' 'Cached: 1942292 kB' 'SwapCached: 0 kB' 'Active: 491932 kB' 'Inactive: 1569164 kB' 'Active(anon): 126832 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 118068 kB' 'Mapped: 50580 kB' 'Shmem: 10464 kB' 'KReclaimable: 67956 kB' 'Slab: 141828 kB' 'SReclaimable: 67956 kB' 'SUnreclaim: 73872 kB' 'KernelStack: 6592 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.366 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 
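The snapshot just printed is internally consistent: with Hugepagesize at 2048 kB, the 1024 pages in HugePages_Total account exactly for the Hugetlb total:

    # 1024 huge pages x 2048 kB/page = 2097152 kB (2 GiB), matching the
    # 'Hugetlb: 2097152 kB' field in the snapshot above.
    echo $(( 1024 * 2048 ))   # -> 2097152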
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.367 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7761912 kB' 'MemUsed: 4480056 kB' 'SwapCached: 0 kB' 'Active: 
492176 kB' 'Inactive: 1569164 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 365100 kB' 'Inactive(file): 1569164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1944728 kB' 'Mapped: 50580 kB' 'AnonPages: 118052 kB' 'Shmem: 10464 kB' 'KernelStack: 6592 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67956 kB' 'Slab: 141828 kB' 'SReclaimable: 67956 kB' 'SUnreclaim: 73872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 
00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.368 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.369 node0=1024 expecting 1024 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.369 00:04:49.369 real 0m1.056s 00:04:49.369 user 0m0.531s 00:04:49.369 sys 0m0.585s 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.369 00:23:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.369 ************************************ 00:04:49.369 END TEST no_shrink_alloc 00:04:49.369 ************************************ 00:04:49.369 00:23:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.369 
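The node0=1024 result confirms that all 1024 pages landed on the machine's single NUMA node, so the test passes and clear_hp tears the reservation down. The teardown traced next amounts to the following (a sketch reconstructed from the xtrace, which shows the same per-node loop and the echo 0 writes):

    # Reconstruction of clear_hp as traced below: for every hugepage-size
    # pool on every NUMA node, write 0 to release the reserved pages, then
    # flag the environment so later stages know the pools were cleared.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }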
00:23:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.369 00:23:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.369 00:04:49.369 real 0m4.814s 00:04:49.369 user 0m2.281s 00:04:49.370 sys 0m2.488s 00:04:49.370 00:23:54 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.370 00:23:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.370 ************************************ 00:04:49.370 END TEST hugepages 00:04:49.370 ************************************ 00:04:49.370 00:23:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:49.370 00:23:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.370 00:23:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.370 00:23:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.370 00:23:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.370 ************************************ 00:04:49.370 START TEST driver 00:04:49.370 ************************************ 00:04:49.370 00:23:54 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.628 * Looking for test storage... 00:04:49.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.628 00:23:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:49.628 00:23:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.628 00:23:54 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.197 00:23:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:50.197 00:23:54 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.197 00:23:54 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.197 00:23:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.197 ************************************ 00:04:50.197 START TEST guess_driver 00:04:50.197 ************************************ 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:50.197 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:50.197 Looking for driver=uio_pci_generic 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.197 00:23:54 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.763 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.020 00:23:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.585 00:04:51.585 real 0m1.444s 00:04:51.585 user 0m0.504s 00:04:51.585 sys 0m0.932s 00:04:51.585 00:23:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
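The decision traced above — vfio rejected because the host exposes no IOMMU groups and unsafe no-IOMMU mode is off, uio_pci_generic accepted because modprobe resolves it to real .ko modules — boils down to the following sketch (a reconstruction of the traced flow, not the verbatim driver.sh; nullglob stands in for the explicit group-count check):

    # Sketch of the driver pick traced above: prefer vfio-pci when IOMMU
    # groups exist or unsafe no-IOMMU mode is enabled; otherwise accept
    # uio_pci_generic if modprobe can resolve it to on-disk modules.
    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*) unsafe=
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif [[ $(modprobe --show-depends uio_pci_generic) == *.ko* ]]; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }

On this Fedora 38 guest (kernel 6.7.0-68.fc38.x86_64) the modprobe probe matches uio.ko.xz and uio_pci_generic.ko.xz, so uio_pci_generic is echoed and the marker loop that follows confirms it.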
xtrace_disable 00:04:51.585 ************************************ 00:04:51.585 END TEST guess_driver 00:04:51.585 ************************************ 00:04:51.585 00:23:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:51.585 00:23:56 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:51.585 00:04:51.585 real 0m2.115s 00:04:51.585 user 0m0.735s 00:04:51.585 sys 0m1.434s 00:04:51.585 00:23:56 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.585 ************************************ 00:04:51.585 END TEST driver 00:04:51.585 00:23:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:51.585 ************************************ 00:04:51.585 00:23:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:51.586 00:23:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.586 00:23:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.586 00:23:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.586 00:23:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.586 ************************************ 00:04:51.586 START TEST devices 00:04:51.586 ************************************ 00:04:51.586 00:23:56 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.844 * Looking for test storage... 00:04:51.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.844 00:23:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:51.844 00:23:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:51.844 00:23:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.844 00:23:56 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.408 00:23:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:52.408 00:23:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.409 00:23:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.409 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.409 00:23:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:52.409 00:23:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.666 No valid GPT data, bailing 00:04:52.666 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.666 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.666 00:23:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.666 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.666 00:23:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.666 00:23:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.666 00:23:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:52.667 
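Each namespace clears the zoned filter above because queue/zoned reads "none" for ordinary NVMe namespaces. The traced is_block_zoned check is essentially:

    # Reconstruction of the zoned-device filter traced above: a device is
    # treated as zoned (and excluded from the tests) when its sysfs
    # queue/zoned attribute reports anything other than "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }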
00:23:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:52.667 No valid GPT data, bailing 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:52.667 No valid GPT data, bailing 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:52.667 00:23:57 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:52.667 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:52.667 00:23:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:52.667 00:23:57 
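"No valid GPT data, bailing" is the expected outcome here: the namespaces are blank, so spdk-gpt.py finds nothing and blkid reports no partition-table type, which makes block_in_use return 1 (free). Stripped of the spdk-gpt.py probe, the traced check reduces to the following (a partial reconstruction; the real helper does more than this):

    # Reconstruction of the in-use test traced above: ask blkid for the
    # partition-table type; an empty PTTYPE means no partition table, so
    # the device is considered free and the function fails (returns 1).
    block_in_use() {
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]   # empty pt -> status 1, i.e. not in use
    }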
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:52.667 No valid GPT data, bailing 00:04:52.925 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.925 00:23:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.925 00:23:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:52.925 00:23:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:52.925 00:23:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:52.925 00:23:57 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.925 00:23:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.925 00:23:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.925 00:23:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.925 00:23:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.925 ************************************ 00:04:52.925 START TEST nvme_mount 00:04:52.925 ************************************ 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.925 00:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.858 Creating new GPT entries in memory. 00:04:53.858 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.858 other utilities. 00:04:53.858 00:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.858 00:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.858 00:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.858 00:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.858 00:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:54.791 Creating new GPT entries in memory. 00:04:54.791 The operation has completed successfully. 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59592 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.791 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.049 00:23:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.305 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.561 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.561 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.561 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.561 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.561 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.818 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.818 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.818 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.818 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.818 00:24:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.075 00:24:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.075 00:24:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.332 00:24:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.590 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.847 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.847 00:04:56.847 real 0m3.997s 00:04:56.847 user 0m0.692s 00:04:56.847 sys 0m1.033s 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.847 00:24:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.848 ************************************ 00:04:56.848 END TEST nvme_mount 00:04:56.848 ************************************ 00:04:56.848 00:24:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:56.848 00:24:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.848 00:24:01 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.848 00:24:01 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.848 00:24:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:56.848 ************************************ 00:04:56.848 START TEST dm_mount 00:04:56.848 ************************************ 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
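The dm_mount setup below repeats the partition_drive flow already traced for nvme_mount, this time with part_no=2, so two 262144-sector (128 MiB) partitions are carved out of nvme0n1. A minimal sketch of the equivalent sgdisk sequence the next trace lines run, with the device name and sector ranges taken from the trace (partprobe stands in here for the sync_dev_uevents.sh helper the harness uses to wait for the partition uevents):

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                            # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: 262144 sectors = 128 MiB
flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: same size, right after p1
partprobe "$disk"                                   # pick up the new partition nodes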
00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.848 00:24:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.780 Creating new GPT entries in memory. 00:04:57.780 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.780 other utilities. 00:04:57.780 00:24:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.780 00:24:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.780 00:24:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.780 00:24:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.780 00:24:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:59.153 Creating new GPT entries in memory. 00:04:59.153 The operation has completed successfully. 00:04:59.153 00:24:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.153 00:24:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.153 00:24:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.153 00:24:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.153 00:24:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:00.087 The operation has completed successfully. 
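With both partitions created, the trace below builds a device-mapper target named nvme_dm_test on top of them and then treats /dev/mapper/nvme_dm_test like an ordinary disk (mkfs.ext4, mount, verify). The dmsetup table itself is fed in by the harness and never echoed to the log; a plausible reconstruction, consistent with dm-0 later appearing under holders/ of both nvme0n1p1 and nvme0n1p2, is a linear concatenation of the two partitions:

# Hypothetical table (not shown in the trace); dmsetup rows are
# "start length target device offset", all counted in 512-byte sectors.
s1=$(blockdev --getsz /dev/nvme0n1p1)   # 262144 sectors here
s2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $s1 linear /dev/nvme0n1p1 0
$s1 $s2 linear /dev/nvme0n1p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-0, as in the trace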
00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60021 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:00.087 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.088 00:24:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.345 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.603 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.604 00:24:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.862 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.120 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.121 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.121 00:05:01.121 real 0m4.246s 00:05:01.121 user 0m0.474s 00:05:01.121 sys 0m0.728s 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.121 00:24:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.121 ************************************ 00:05:01.121 END TEST dm_mount 00:05:01.121 ************************************ 00:05:01.121 00:24:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.121 00:24:05 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.121 00:24:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.379 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.379 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.379 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.379 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.379 00:24:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.379 00:05:01.379 real 0m9.820s 00:05:01.379 user 0m1.831s 00:05:01.379 sys 0m2.371s 00:05:01.379 00:24:06 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.379 00:24:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.379 ************************************ 00:05:01.379 END TEST devices 00:05:01.379 ************************************ 00:05:01.379 00:24:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:01.688 00:05:01.688 real 0m21.701s 00:05:01.688 user 0m6.925s 00:05:01.688 sys 0m9.060s 00:05:01.688 00:24:06 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.688 00:24:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.688 ************************************ 00:05:01.688 END TEST setup.sh 00:05:01.688 ************************************ 00:05:01.688 00:24:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.688 00:24:06 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.254 Hugepages 00:05:02.254 node hugesize free / total 00:05:02.254 node0 1048576kB 0 / 0 00:05:02.254 node0 2048kB 2048 / 2048 00:05:02.254 00:05:02.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.254 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.513 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:02.513 00:24:07 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.513 00:24:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.513 00:24:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.513 00:24:07 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.079 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.079 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.079 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.337 00:24:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:04.270 00:24:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:04.270 00:24:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:04.270 00:24:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.270 00:24:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:04.270 00:24:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:04.270 00:24:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:04.270 00:24:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.270 00:24:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:04.270 00:24:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.270 00:24:09 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:04.270 00:24:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.270 00:24:09 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.787 Waiting for block devices as requested 00:05:04.787 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.787 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.787 00:24:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:04.787 00:24:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.787 00:24:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.787 00:24:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.787 00:24:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:04.787 00:24:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:04.787 00:24:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:04.787 00:24:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:04.787 00:24:09 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:04.787 00:24:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:04.787 00:24:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:05.046 00:24:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1557 -- # continue 00:05:05.046 
00:24:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:05.046 00:24:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:05.046 00:24:09 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:05.046 00:24:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:05.046 00:24:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:05.046 00:24:09 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:05.046 00:24:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:05.046 00:24:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:05.046 00:24:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:05.046 00:24:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:05.046 00:24:09 -- common/autotest_common.sh@1557 -- # continue 00:05:05.046 00:24:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:05.046 00:24:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.046 00:24:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.046 00:24:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:05.046 00:24:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.046 00:24:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.046 00:24:09 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.870 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.870 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.870 00:24:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:05.870 00:24:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.870 00:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 00:24:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:05.870 00:24:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:05.870 00:24:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.870 00:24:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:05.870 00:24:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:05.870 00:24:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:05.870 00:24:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:05.870 00:24:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:05.870 00:24:10 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.870 00:24:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.870 00:24:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:05.870 00:24:10 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:05.870 00:24:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.870 00:24:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:05.870 00:24:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.870 00:24:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:05.870 00:24:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.870 00:24:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:05.870 00:24:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.870 00:24:10 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:05.870 00:24:10 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.870 00:24:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:05.870 00:24:10 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:05.870 00:24:10 -- common/autotest_common.sh@1593 -- # return 0 00:05:05.870 00:24:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:05.870 00:24:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:05.870 00:24:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.870 00:24:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.870 00:24:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:05.870 00:24:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.870 00:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 00:24:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:05.870 00:24:10 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.870 00:24:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.870 00:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.870 00:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:06.129 ************************************ 00:05:06.129 START TEST env 00:05:06.129 ************************************ 00:05:06.129 00:24:10 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:06.129 * Looking for test storage... 
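Before the env suite output starts, note what the pre-cleanup above established for each controller: the PCI BDF was resolved to its character device through sysfs; id-ctrl reported oacs ' 0x12a' (bit 3, mask 0x8, means Namespace Management is supported) and unvmcap ' 0' (no unallocated capacity, so nothing to revert); and opal_revert_cleanup then matched each controller's PCI device ID against 0x0a54 (Intel datacenter NVMe parts that need an Opal revert on these CI hosts). The QEMU controllers report 0x0010, so that list is empty and the step is a no-op. A condensed sketch of the checks for one BDF, with the grep/cut parsing copied from the trace:

bdf=0000:00:10.0
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$path")                          # /dev/nvme1 for this BDF
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
(( oacs & 0x8 )) && echo "$ctrlr supports namespace management"
nvme id-ctrl "$ctrlr" | grep unvmcap                    # ' 0' => fully allocated
[[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf needs Opal revert"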
00:05:06.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:06.129 00:24:10 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.129 00:24:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.129 00:24:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.129 00:24:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.129 ************************************ 00:05:06.129 START TEST env_memory 00:05:06.129 ************************************ 00:05:06.129 00:24:10 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.129 00:05:06.129 00:05:06.129 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.129 http://cunit.sourceforge.net/ 00:05:06.129 00:05:06.129 00:05:06.129 Suite: memory 00:05:06.129 Test: alloc and free memory map ...[2024-07-12 00:24:10.984697] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.129 passed 00:05:06.129 Test: mem map translation ...[2024-07-12 00:24:11.047898] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.129 [2024-07-12 00:24:11.048062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.129 [2024-07-12 00:24:11.048196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.129 [2024-07-12 00:24:11.048261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.388 passed 00:05:06.388 Test: mem map registration ...[2024-07-12 00:24:11.148161] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.388 [2024-07-12 00:24:11.148276] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.388 passed 00:05:06.388 Test: mem map adjacent registrations ...passed 00:05:06.388 00:05:06.388 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.388 suites 1 1 n/a 0 0 00:05:06.388 tests 4 4 4 0 0 00:05:06.388 asserts 152 152 152 0 n/a 00:05:06.388 00:05:06.388 Elapsed time = 0.352 seconds 00:05:06.388 00:05:06.388 real 0m0.402s 00:05:06.388 user 0m0.361s 00:05:06.388 sys 0m0.034s 00:05:06.388 00:24:11 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.388 00:24:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.388 ************************************ 00:05:06.388 END TEST env_memory 00:05:06.388 ************************************ 00:05:06.646 00:24:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:06.646 00:24:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.646 00:24:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.646 00:24:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.646 00:24:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.646 ************************************ 00:05:06.646 START TEST env_vtophys 
00:05:06.646 ************************************ 00:05:06.646 00:24:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.646 EAL: lib.eal log level changed from notice to debug 00:05:06.646 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.646 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.646 EAL: Maximum logical cores by configuration: 128 00:05:06.646 EAL: Detected CPU lcores: 10 00:05:06.646 EAL: Detected NUMA nodes: 1 00:05:06.646 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:06.646 EAL: Detected shared linkage of DPDK 00:05:06.646 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.646 EAL: Selected IOVA mode 'PA' 00:05:06.646 EAL: Probing VFIO support... 00:05:06.646 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.646 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.646 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.646 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.646 EAL: Setting up physically contiguous memory... 00:05:06.646 EAL: Setting maximum number of open files to 524288 00:05:06.646 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.646 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.646 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.646 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.646 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.646 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.646 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.646 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.646 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.646 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.646 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.646 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.646 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.646 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.646 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.646 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.646 EAL: Hugepages will be freed exactly as allocated. 
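A quick consistency check on the EAL numbers above: each of the 4 memseg lists covers n_segs:8192 segments of hugepage_sz:2097152 bytes, and 8192 x 2 MiB = 16 GiB = 0x400000000, exactly the size of each "VA reserved" region (the small 0x61000 areas in front hold the list headers). EAL therefore reserves 4 x 16 GiB of virtual address space up front and backs it with 2 MiB hugepages only on demand, which is what lets the vtophys test that follows grow and shrink the heap repeatedly; each "expanded by N MB" event below is apparently a power-of-two test allocation plus one extra 2 MB hugepage (4 = 2+2, 6 = 4+2, ... 1026 = 1024+2), released again by the matching "shrunk by" event. A one-line check of the memseg arithmetic:

echo $(( 8192 * 2097152 )) $(( 0x400000000 ))   # both print 17179869184, i.e. 16 GiB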
00:05:06.646 EAL: No shared files mode enabled, IPC is disabled 00:05:06.646 EAL: No shared files mode enabled, IPC is disabled 00:05:06.646 EAL: TSC frequency is ~2200000 KHz 00:05:06.646 EAL: Main lcore 0 is ready (tid=7f9b4d846a40;cpuset=[0]) 00:05:06.646 EAL: Trying to obtain current memory policy. 00:05:06.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.646 EAL: Restoring previous memory policy: 0 00:05:06.646 EAL: request: mp_malloc_sync 00:05:06.646 EAL: No shared files mode enabled, IPC is disabled 00:05:06.646 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.646 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.646 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.646 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.646 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.904 00:05:06.904 00:05:06.904 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.904 http://cunit.sourceforge.net/ 00:05:06.904 00:05:06.904 00:05:06.904 Suite: components_suite 00:05:07.163 Test: vtophys_malloc_test ...passed 00:05:07.163 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.163 EAL: Restoring previous memory policy: 4 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.163 EAL: Trying to obtain current memory policy. 00:05:07.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.163 EAL: Restoring previous memory policy: 4 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.163 EAL: Trying to obtain current memory policy. 00:05:07.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.163 EAL: Restoring previous memory policy: 4 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.163 EAL: request: mp_malloc_sync 00:05:07.163 EAL: No shared files mode enabled, IPC is disabled 00:05:07.163 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.163 EAL: Trying to obtain current memory policy. 
00:05:07.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.421 EAL: Restoring previous memory policy: 4 00:05:07.421 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.421 EAL: request: mp_malloc_sync 00:05:07.421 EAL: No shared files mode enabled, IPC is disabled 00:05:07.421 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.421 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.421 EAL: request: mp_malloc_sync 00:05:07.421 EAL: No shared files mode enabled, IPC is disabled 00:05:07.421 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.421 EAL: Trying to obtain current memory policy. 00:05:07.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.421 EAL: Restoring previous memory policy: 4 00:05:07.421 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.421 EAL: request: mp_malloc_sync 00:05:07.421 EAL: No shared files mode enabled, IPC is disabled 00:05:07.421 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.421 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.421 EAL: request: mp_malloc_sync 00:05:07.421 EAL: No shared files mode enabled, IPC is disabled 00:05:07.421 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.421 EAL: Trying to obtain current memory policy. 00:05:07.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.421 EAL: Restoring previous memory policy: 4 00:05:07.421 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.421 EAL: request: mp_malloc_sync 00:05:07.421 EAL: No shared files mode enabled, IPC is disabled 00:05:07.421 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.691 EAL: request: mp_malloc_sync 00:05:07.691 EAL: No shared files mode enabled, IPC is disabled 00:05:07.691 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.691 EAL: Trying to obtain current memory policy. 00:05:07.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.691 EAL: Restoring previous memory policy: 4 00:05:07.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.691 EAL: request: mp_malloc_sync 00:05:07.691 EAL: No shared files mode enabled, IPC is disabled 00:05:07.691 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.949 EAL: request: mp_malloc_sync 00:05:07.949 EAL: No shared files mode enabled, IPC is disabled 00:05:07.949 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.207 EAL: Trying to obtain current memory policy. 00:05:08.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.207 EAL: Restoring previous memory policy: 4 00:05:08.207 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.207 EAL: request: mp_malloc_sync 00:05:08.207 EAL: No shared files mode enabled, IPC is disabled 00:05:08.207 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.773 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.773 EAL: request: mp_malloc_sync 00:05:08.773 EAL: No shared files mode enabled, IPC is disabled 00:05:08.773 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.031 EAL: Trying to obtain current memory policy. 
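The expand/shrink pairs running through this section come from vtophys_spdk_malloc_test allocating progressively larger buffers, (2^k + 2) MB for k = 1..10 (4MB, 6MB, 10MB, ... 1026MB), and freeing each one so the heap shrinks again. A sketch of that pattern with the public allocator, not the test's actual source:

```c
#include "spdk/env.h"

/* Allocate (2^k + 2) MB buffers and free them again, mirroring the
 * "expanded by ... / shrunk by ..." pairs in the log. */
static void
heap_grow_shrink_demo(void)
{
	for (int k = 1; k <= 10; k++) {
		size_t sz = ((1ULL << k) + 2) * 1024 * 1024;
		void *buf = spdk_malloc(sz, 0x1000, NULL,
					SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

		if (buf == NULL) {
			break; /* heap could not expand this far */
		}
		/* ... verify spdk_vtophys(buf, NULL) here ... */
		spdk_free(buf); /* heap shrinks; mem event callback fires */
	}
}
```

Each spdk_malloc that outgrows the current heap triggers the mp_malloc_sync request and the "Heap on socket 0 was expanded" line; the matching spdk_free produces the "shrunk" line.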
00:05:09.031 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.289 EAL: Restoring previous memory policy: 4 00:05:09.289 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.289 EAL: request: mp_malloc_sync 00:05:09.289 EAL: No shared files mode enabled, IPC is disabled 00:05:09.289 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.221 EAL: request: mp_malloc_sync 00:05:10.221 EAL: No shared files mode enabled, IPC is disabled 00:05:10.221 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.153 EAL: Trying to obtain current memory policy. 00:05:11.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.410 EAL: Restoring previous memory policy: 4 00:05:11.410 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.410 EAL: request: mp_malloc_sync 00:05:11.410 EAL: No shared files mode enabled, IPC is disabled 00:05:11.410 EAL: Heap on socket 0 was expanded by 1026MB 00:05:13.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.309 EAL: request: mp_malloc_sync 00:05:13.309 EAL: No shared files mode enabled, IPC is disabled 00:05:13.309 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.694 passed 00:05:14.694 00:05:14.694 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.694 suites 1 1 n/a 0 0 00:05:14.694 tests 2 2 2 0 0 00:05:14.694 asserts 5411 5411 5411 0 n/a 00:05:14.694 00:05:14.694 Elapsed time = 7.808 seconds 00:05:14.694 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.694 EAL: request: mp_malloc_sync 00:05:14.694 EAL: No shared files mode enabled, IPC is disabled 00:05:14.694 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.694 EAL: No shared files mode enabled, IPC is disabled 00:05:14.694 EAL: No shared files mode enabled, IPC is disabled 00:05:14.694 EAL: No shared files mode enabled, IPC is disabled 00:05:14.694 00:05:14.694 real 0m8.135s 00:05:14.694 user 0m6.947s 00:05:14.694 sys 0m1.020s 00:05:14.694 00:24:19 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.694 00:24:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.694 ************************************ 00:05:14.694 END TEST env_vtophys 00:05:14.694 ************************************ 00:05:14.694 00:24:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:14.694 00:24:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.694 00:24:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.694 00:24:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.694 00:24:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.694 ************************************ 00:05:14.694 START TEST env_pci 00:05:14.694 ************************************ 00:05:14.694 00:24:19 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.694 00:05:14.694 00:05:14.694 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.694 http://cunit.sourceforge.net/ 00:05:14.694 00:05:14.694 00:05:14.694 Suite: pci 00:05:14.694 Test: pci_hook ...[2024-07-12 00:24:19.582037] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61288 has claimed it 00:05:14.694 passed 00:05:14.694 00:05:14.694 EAL: Cannot find device (10000:00:01.0) 00:05:14.694 EAL: Failed to attach device on primary process 00:05:14.694 Run Summary: Type Total Ran Passed Failed 
Inactive 00:05:14.694 suites 1 1 n/a 0 0 00:05:14.694 tests 1 1 1 0 0 00:05:14.694 asserts 25 25 25 0 n/a 00:05:14.694 00:05:14.694 Elapsed time = 0.008 seconds 00:05:14.954 00:05:14.954 real 0m0.091s 00:05:14.954 user 0m0.044s 00:05:14.954 sys 0m0.045s 00:05:14.954 00:24:19 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.954 00:24:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.954 ************************************ 00:05:14.954 END TEST env_pci 00:05:14.954 ************************************ 00:05:14.954 00:24:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:14.954 00:24:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.954 00:24:19 env -- env/env.sh@15 -- # uname 00:05:14.954 00:24:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.954 00:24:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.954 00:24:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.954 00:24:19 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:14.954 00:24:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.954 00:24:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.954 ************************************ 00:05:14.954 START TEST env_dpdk_post_init 00:05:14.954 ************************************ 00:05:14.954 00:24:19 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.954 EAL: Detected CPU lcores: 10 00:05:14.954 EAL: Detected NUMA nodes: 1 00:05:14.954 EAL: Detected shared linkage of DPDK 00:05:14.954 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.954 EAL: Selected IOVA mode 'PA' 00:05:15.212 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.212 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:15.212 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:15.212 Starting DPDK initialization... 00:05:15.212 Starting SPDK post initialization... 00:05:15.212 SPDK NVMe probe 00:05:15.212 Attaching to 0000:00:10.0 00:05:15.212 Attaching to 0000:00:11.0 00:05:15.212 Attached to 0000:00:10.0 00:05:15.212 Attached to 0000:00:11.0 00:05:15.212 Cleaning up... 
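The env_pci test above deliberately provokes the spdk_pci_device_claim lock error to prove device-claim arbitration works; env_dpdk_post_init then re-initializes the environment with the harness's --base-virtaddr and probes the two emulated NVMe controllers (1b36:0010), producing the "Attaching to / Attached to" lines. Roughly the same post-init flow in C, with callback bodies trimmed and the app name invented:

```c
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)opts;
	printf("Attaching to %s\n", trid->traddr);
	return true; /* attach every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)ctrlr; (void)opts;
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_demo"; /* placeholder */
	opts.core_mask = "0x1";
	opts.base_virtaddr = 0x200000000000ULL;
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* NULL trid: probe local PCIe NVMe devices. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```

Pinning base_virtaddr to 0x200000000000 is what keeps the memseg reservations at the same addresses across the primary and any secondary processes.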
00:05:15.212 00:05:15.212 real 0m0.313s 00:05:15.212 user 0m0.098s 00:05:15.212 sys 0m0.114s 00:05:15.212 ************************************ 00:05:15.212 00:24:19 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.212 00:24:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.212 END TEST env_dpdk_post_init 00:05:15.212 ************************************ 00:05:15.212 00:24:20 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.212 00:24:20 env -- env/env.sh@26 -- # uname 00:05:15.212 00:24:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.212 00:24:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.212 00:24:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.212 00:24:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.212 00:24:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.212 ************************************ 00:05:15.212 START TEST env_mem_callbacks 00:05:15.212 ************************************ 00:05:15.212 00:24:20 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.212 EAL: Detected CPU lcores: 10 00:05:15.212 EAL: Detected NUMA nodes: 1 00:05:15.212 EAL: Detected shared linkage of DPDK 00:05:15.212 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.212 EAL: Selected IOVA mode 'PA' 00:05:15.470 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.470 00:05:15.470 00:05:15.470 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.470 http://cunit.sourceforge.net/ 00:05:15.470 00:05:15.470 00:05:15.470 Suite: memory 00:05:15.470 Test: test ... 
00:05:15.470 register 0x200000200000 2097152 00:05:15.470 malloc 3145728 00:05:15.470 register 0x200000400000 4194304 00:05:15.470 buf 0x2000004fffc0 len 3145728 PASSED 00:05:15.470 malloc 64 00:05:15.470 buf 0x2000004ffec0 len 64 PASSED 00:05:15.470 malloc 4194304 00:05:15.470 register 0x200000800000 6291456 00:05:15.470 buf 0x2000009fffc0 len 4194304 PASSED 00:05:15.470 free 0x2000004fffc0 3145728 00:05:15.470 free 0x2000004ffec0 64 00:05:15.470 unregister 0x200000400000 4194304 PASSED 00:05:15.470 free 0x2000009fffc0 4194304 00:05:15.470 unregister 0x200000800000 6291456 PASSED 00:05:15.470 malloc 8388608 00:05:15.470 register 0x200000400000 10485760 00:05:15.470 buf 0x2000005fffc0 len 8388608 PASSED 00:05:15.470 free 0x2000005fffc0 8388608 00:05:15.470 unregister 0x200000400000 10485760 PASSED 00:05:15.470 passed 00:05:15.470 00:05:15.470 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.470 suites 1 1 n/a 0 0 00:05:15.470 tests 1 1 1 0 0 00:05:15.470 asserts 15 15 15 0 n/a 00:05:15.470 00:05:15.470 Elapsed time = 0.072 seconds 00:05:15.470 00:05:15.470 real 0m0.286s 00:05:15.470 user 0m0.110s 00:05:15.470 sys 0m0.074s 00:05:15.470 ************************************ 00:05:15.470 END TEST env_mem_callbacks 00:05:15.470 00:24:20 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.470 00:24:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.470 ************************************ 00:05:15.470 00:24:20 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.470 ************************************ 00:05:15.470 END TEST env 00:05:15.470 ************************************ 00:05:15.470 00:05:15.470 real 0m9.581s 00:05:15.470 user 0m7.687s 00:05:15.470 sys 0m1.495s 00:05:15.470 00:24:20 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.470 00:24:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.728 00:24:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.728 00:24:20 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:15.728 00:24:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.728 00:24:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.728 00:24:20 -- common/autotest_common.sh@10 -- # set +x 00:05:15.728 ************************************ 00:05:15.728 START TEST rpc 00:05:15.728 ************************************ 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:15.728 * Looking for test storage... 00:05:15.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.728 00:24:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=61407 00:05:15.728 00:24:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:15.728 00:24:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.728 00:24:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 61407 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@829 -- # '[' -z 61407 ']' 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
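The rpc suite starting here drives spdk_tgt over the Unix socket /var/tmp/spdk.sock via rpc_cmd. On the target side, each method the suite calls (bdev_malloc_create, bdev_get_bdevs, ...) is a C handler registered with SPDK_RPC_REGISTER. A minimal sketch of such a handler; the method name "demo_ping" is made up for illustration and is not part of this test:

```c
#include "spdk/json.h"
#include "spdk/jsonrpc.h"
#include "spdk/rpc.h"

static void
rpc_demo_ping(struct spdk_jsonrpc_request *request,
	      const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "demo_ping takes no parameters");
		return;
	}

	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "pong");
	spdk_jsonrpc_end_result(request, w);
}
/* Callable once the target reaches its runtime state. */
SPDK_RPC_REGISTER("demo_ping", rpc_demo_ping, SPDK_RPC_RUNTIME)
```

The SPDK_RPC_RUNTIME state mask is why the harness has to waitforlisten before issuing any rpc_cmd: methods gated on runtime state reject calls made during startup.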
00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.728 00:24:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 [2024-07-12 00:24:20.674349] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:15.986 [2024-07-12 00:24:20.674556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:05:15.986 [2024-07-12 00:24:20.850235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.246 [2024-07-12 00:24:21.123864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.246 [2024-07-12 00:24:21.123923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61407' to capture a snapshot of events at runtime. 00:05:16.246 [2024-07-12 00:24:21.123959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.246 [2024-07-12 00:24:21.123972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.246 [2024-07-12 00:24:21.123986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61407 for offline analysis/debug. 00:05:16.246 [2024-07-12 00:24:21.124039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.182 00:24:21 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.182 00:24:21 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.182 00:24:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.182 00:24:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.182 00:24:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.182 00:24:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.182 00:24:21 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.182 00:24:21 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.182 00:24:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.182 ************************************ 00:05:17.182 START TEST rpc_integrity 00:05:17.182 ************************************ 00:05:17.182 00:24:21 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:17.182 00:24:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.182 00:24:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.182 00:24:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.182 00:24:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.182 00:24:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.182 00:24:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.182 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.182 00:24:22 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.182 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.182 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.182 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.182 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.182 { 00:05:17.182 "aliases": [ 00:05:17.182 "e97fc6c4-77af-4195-95f2-3ccf64b49d22" 00:05:17.182 ], 00:05:17.182 "assigned_rate_limits": { 00:05:17.182 "r_mbytes_per_sec": 0, 00:05:17.182 "rw_ios_per_sec": 0, 00:05:17.182 "rw_mbytes_per_sec": 0, 00:05:17.182 "w_mbytes_per_sec": 0 00:05:17.182 }, 00:05:17.182 "block_size": 512, 00:05:17.182 "claimed": false, 00:05:17.182 "driver_specific": {}, 00:05:17.182 "memory_domains": [ 00:05:17.182 { 00:05:17.182 "dma_device_id": "system", 00:05:17.182 "dma_device_type": 1 00:05:17.182 }, 00:05:17.182 { 00:05:17.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.182 "dma_device_type": 2 00:05:17.182 } 00:05:17.182 ], 00:05:17.182 "name": "Malloc0", 00:05:17.182 "num_blocks": 16384, 00:05:17.182 "product_name": "Malloc disk", 00:05:17.182 "supported_io_types": { 00:05:17.182 "abort": true, 00:05:17.182 "compare": false, 00:05:17.182 "compare_and_write": false, 00:05:17.182 "copy": true, 00:05:17.182 "flush": true, 00:05:17.182 "get_zone_info": false, 00:05:17.182 "nvme_admin": false, 00:05:17.182 "nvme_io": false, 00:05:17.182 "nvme_io_md": false, 00:05:17.182 "nvme_iov_md": false, 00:05:17.182 "read": true, 00:05:17.182 "reset": true, 00:05:17.182 "seek_data": false, 00:05:17.182 "seek_hole": false, 00:05:17.182 "unmap": true, 00:05:17.182 "write": true, 00:05:17.182 "write_zeroes": true, 00:05:17.182 "zcopy": true, 00:05:17.182 "zone_append": false, 00:05:17.182 "zone_management": false 00:05:17.182 }, 00:05:17.182 "uuid": "e97fc6c4-77af-4195-95f2-3ccf64b49d22", 00:05:17.182 "zoned": false 00:05:17.182 } 00:05:17.182 ]' 00:05:17.182 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.441 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.441 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.441 [2024-07-12 00:24:22.123836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.441 [2024-07-12 00:24:22.123917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.441 [2024-07-12 00:24:22.123960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:17.441 [2024-07-12 00:24:22.123976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.441 [2024-07-12 00:24:22.126957] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.441 [2024-07-12 00:24:22.127005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.441 Passthru0 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.441 
00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.441 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.441 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.441 { 00:05:17.441 "aliases": [ 00:05:17.441 "e97fc6c4-77af-4195-95f2-3ccf64b49d22" 00:05:17.441 ], 00:05:17.441 "assigned_rate_limits": { 00:05:17.441 "r_mbytes_per_sec": 0, 00:05:17.441 "rw_ios_per_sec": 0, 00:05:17.441 "rw_mbytes_per_sec": 0, 00:05:17.441 "w_mbytes_per_sec": 0 00:05:17.441 }, 00:05:17.441 "block_size": 512, 00:05:17.441 "claim_type": "exclusive_write", 00:05:17.441 "claimed": true, 00:05:17.441 "driver_specific": {}, 00:05:17.441 "memory_domains": [ 00:05:17.441 { 00:05:17.441 "dma_device_id": "system", 00:05:17.441 "dma_device_type": 1 00:05:17.441 }, 00:05:17.441 { 00:05:17.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.441 "dma_device_type": 2 00:05:17.441 } 00:05:17.441 ], 00:05:17.441 "name": "Malloc0", 00:05:17.441 "num_blocks": 16384, 00:05:17.441 "product_name": "Malloc disk", 00:05:17.441 "supported_io_types": { 00:05:17.441 "abort": true, 00:05:17.441 "compare": false, 00:05:17.441 "compare_and_write": false, 00:05:17.441 "copy": true, 00:05:17.441 "flush": true, 00:05:17.441 "get_zone_info": false, 00:05:17.441 "nvme_admin": false, 00:05:17.441 "nvme_io": false, 00:05:17.441 "nvme_io_md": false, 00:05:17.441 "nvme_iov_md": false, 00:05:17.441 "read": true, 00:05:17.441 "reset": true, 00:05:17.441 "seek_data": false, 00:05:17.441 "seek_hole": false, 00:05:17.441 "unmap": true, 00:05:17.442 "write": true, 00:05:17.442 "write_zeroes": true, 00:05:17.442 "zcopy": true, 00:05:17.442 "zone_append": false, 00:05:17.442 "zone_management": false 00:05:17.442 }, 00:05:17.442 "uuid": "e97fc6c4-77af-4195-95f2-3ccf64b49d22", 00:05:17.442 "zoned": false 00:05:17.442 }, 00:05:17.442 { 00:05:17.442 "aliases": [ 00:05:17.442 "1af3aed5-fd85-512e-a2e4-107ce3ecb6b3" 00:05:17.442 ], 00:05:17.442 "assigned_rate_limits": { 00:05:17.442 "r_mbytes_per_sec": 0, 00:05:17.442 "rw_ios_per_sec": 0, 00:05:17.442 "rw_mbytes_per_sec": 0, 00:05:17.442 "w_mbytes_per_sec": 0 00:05:17.442 }, 00:05:17.442 "block_size": 512, 00:05:17.442 "claimed": false, 00:05:17.442 "driver_specific": { 00:05:17.442 "passthru": { 00:05:17.442 "base_bdev_name": "Malloc0", 00:05:17.442 "name": "Passthru0" 00:05:17.442 } 00:05:17.442 }, 00:05:17.442 "memory_domains": [ 00:05:17.442 { 00:05:17.442 "dma_device_id": "system", 00:05:17.442 "dma_device_type": 1 00:05:17.442 }, 00:05:17.442 { 00:05:17.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.442 "dma_device_type": 2 00:05:17.442 } 00:05:17.442 ], 00:05:17.442 "name": "Passthru0", 00:05:17.442 "num_blocks": 16384, 00:05:17.442 "product_name": "passthru", 00:05:17.442 "supported_io_types": { 00:05:17.442 "abort": true, 00:05:17.442 "compare": false, 00:05:17.442 "compare_and_write": false, 00:05:17.442 "copy": true, 00:05:17.442 "flush": true, 00:05:17.442 "get_zone_info": false, 00:05:17.442 "nvme_admin": false, 00:05:17.442 "nvme_io": false, 00:05:17.442 "nvme_io_md": false, 00:05:17.442 "nvme_iov_md": false, 00:05:17.442 "read": true, 00:05:17.442 "reset": true, 00:05:17.442 "seek_data": false, 00:05:17.442 "seek_hole": false, 00:05:17.442 "unmap": true, 00:05:17.442 "write": true, 00:05:17.442 "write_zeroes": true, 00:05:17.442 
"zcopy": true, 00:05:17.442 "zone_append": false, 00:05:17.442 "zone_management": false 00:05:17.442 }, 00:05:17.442 "uuid": "1af3aed5-fd85-512e-a2e4-107ce3ecb6b3", 00:05:17.442 "zoned": false 00:05:17.442 } 00:05:17.442 ]' 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.442 00:24:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.442 00:05:17.442 real 0m0.358s 00:05:17.442 user 0m0.219s 00:05:17.442 sys 0m0.029s 00:05:17.442 ************************************ 00:05:17.442 END TEST rpc_integrity 00:05:17.442 ************************************ 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.442 00:24:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.442 00:24:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.442 00:24:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.442 00:24:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.442 00:24:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.442 00:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.442 ************************************ 00:05:17.442 START TEST rpc_plugins 00:05:17.442 ************************************ 00:05:17.442 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:17.442 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.442 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.442 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:05:17.714 { 00:05:17.714 "aliases": [ 00:05:17.714 "1ff0347b-4e1e-4e42-a319-53bcdf2cdcea" 00:05:17.714 ], 00:05:17.714 "assigned_rate_limits": { 00:05:17.714 "r_mbytes_per_sec": 0, 00:05:17.714 "rw_ios_per_sec": 0, 00:05:17.714 "rw_mbytes_per_sec": 0, 00:05:17.714 "w_mbytes_per_sec": 0 00:05:17.714 }, 00:05:17.714 "block_size": 4096, 00:05:17.714 "claimed": false, 00:05:17.714 "driver_specific": {}, 00:05:17.714 "memory_domains": [ 00:05:17.714 { 00:05:17.714 "dma_device_id": "system", 00:05:17.714 "dma_device_type": 1 00:05:17.714 }, 00:05:17.714 { 00:05:17.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.714 "dma_device_type": 2 00:05:17.714 } 00:05:17.714 ], 00:05:17.714 "name": "Malloc1", 00:05:17.714 "num_blocks": 256, 00:05:17.714 "product_name": "Malloc disk", 00:05:17.714 "supported_io_types": { 00:05:17.714 "abort": true, 00:05:17.714 "compare": false, 00:05:17.714 "compare_and_write": false, 00:05:17.714 "copy": true, 00:05:17.714 "flush": true, 00:05:17.714 "get_zone_info": false, 00:05:17.714 "nvme_admin": false, 00:05:17.714 "nvme_io": false, 00:05:17.714 "nvme_io_md": false, 00:05:17.714 "nvme_iov_md": false, 00:05:17.714 "read": true, 00:05:17.714 "reset": true, 00:05:17.714 "seek_data": false, 00:05:17.714 "seek_hole": false, 00:05:17.714 "unmap": true, 00:05:17.714 "write": true, 00:05:17.714 "write_zeroes": true, 00:05:17.714 "zcopy": true, 00:05:17.714 "zone_append": false, 00:05:17.714 "zone_management": false 00:05:17.714 }, 00:05:17.714 "uuid": "1ff0347b-4e1e-4e42-a319-53bcdf2cdcea", 00:05:17.714 "zoned": false 00:05:17.714 } 00:05:17.714 ]' 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.714 00:24:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.714 00:05:17.714 real 0m0.191s 00:05:17.714 user 0m0.123s 00:05:17.714 sys 0m0.023s 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.714 ************************************ 00:05:17.714 END TEST rpc_plugins 00:05:17.714 ************************************ 00:05:17.714 00:24:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.714 00:24:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.714 00:24:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.714 00:24:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.714 00:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 ************************************ 00:05:17.714 START TEST 
rpc_trace_cmd_test 00:05:17.714 ************************************ 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.714 "bdev": { 00:05:17.714 "mask": "0x8", 00:05:17.714 "tpoint_mask": "0xffffffffffffffff" 00:05:17.714 }, 00:05:17.714 "bdev_nvme": { 00:05:17.714 "mask": "0x4000", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "blobfs": { 00:05:17.714 "mask": "0x80", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "dsa": { 00:05:17.714 "mask": "0x200", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "ftl": { 00:05:17.714 "mask": "0x40", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "iaa": { 00:05:17.714 "mask": "0x1000", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "iscsi_conn": { 00:05:17.714 "mask": "0x2", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "nvme_pcie": { 00:05:17.714 "mask": "0x800", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "nvme_tcp": { 00:05:17.714 "mask": "0x2000", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "nvmf_rdma": { 00:05:17.714 "mask": "0x10", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "nvmf_tcp": { 00:05:17.714 "mask": "0x20", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "scsi": { 00:05:17.714 "mask": "0x4", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "sock": { 00:05:17.714 "mask": "0x8000", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "thread": { 00:05:17.714 "mask": "0x400", 00:05:17.714 "tpoint_mask": "0x0" 00:05:17.714 }, 00:05:17.714 "tpoint_group_mask": "0x8", 00:05:17.714 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61407" 00:05:17.714 }' 00:05:17.714 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.974 00:05:17.974 real 0m0.265s 00:05:17.974 user 0m0.229s 00:05:17.974 sys 0m0.028s 00:05:17.974 ************************************ 00:05:17.974 END TEST rpc_trace_cmd_test 00:05:17.974 ************************************ 00:05:17.974 00:24:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.974 00:24:22 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:18.232 00:24:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.232 00:24:22 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:18.232 00:24:22 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:18.232 00:24:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.232 00:24:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.232 00:24:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.232 ************************************ 00:05:18.232 START TEST go_rpc 00:05:18.232 ************************************ 00:05:18.232 00:24:22 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:18.232 00:24:22 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.232 00:24:22 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:18.232 00:24:22 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:18.232 00:24:22 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:18.232 00:24:22 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.232 00:24:22 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.232 00:24:22 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["27b06014-c914-4594-81aa-1f813960415d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"27b06014-c914-4594-81aa-1f813960415d","zoned":false}]' 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:18.233 00:24:23 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:18.233 00:05:18.233 real 0m0.239s 00:05:18.233 user 0m0.139s 00:05:18.233 sys 0m0.037s 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.233 00:24:23 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.233 ************************************ 00:05:18.233 END TEST 
go_rpc 00:05:18.233 ************************************ 00:05:18.491 00:24:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.491 00:24:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.491 00:24:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.491 00:24:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.491 00:24:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.491 00:24:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 ************************************ 00:05:18.491 START TEST rpc_daemon_integrity 00:05:18.491 ************************************ 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.491 { 00:05:18.491 "aliases": [ 00:05:18.491 "043e8d15-423f-4d33-9f7d-d905e74a72b1" 00:05:18.491 ], 00:05:18.491 "assigned_rate_limits": { 00:05:18.491 "r_mbytes_per_sec": 0, 00:05:18.491 "rw_ios_per_sec": 0, 00:05:18.491 "rw_mbytes_per_sec": 0, 00:05:18.491 "w_mbytes_per_sec": 0 00:05:18.491 }, 00:05:18.491 "block_size": 512, 00:05:18.491 "claimed": false, 00:05:18.491 "driver_specific": {}, 00:05:18.491 "memory_domains": [ 00:05:18.491 { 00:05:18.491 "dma_device_id": "system", 00:05:18.491 "dma_device_type": 1 00:05:18.491 }, 00:05:18.491 { 00:05:18.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.491 "dma_device_type": 2 00:05:18.491 } 00:05:18.491 ], 00:05:18.491 "name": "Malloc3", 00:05:18.491 "num_blocks": 16384, 00:05:18.491 "product_name": "Malloc disk", 00:05:18.491 "supported_io_types": { 00:05:18.491 "abort": true, 00:05:18.491 "compare": false, 00:05:18.491 "compare_and_write": false, 00:05:18.491 "copy": true, 00:05:18.491 "flush": true, 00:05:18.491 "get_zone_info": false, 00:05:18.491 "nvme_admin": false, 00:05:18.491 "nvme_io": false, 00:05:18.491 "nvme_io_md": false, 00:05:18.491 "nvme_iov_md": false, 00:05:18.491 "read": true, 00:05:18.491 "reset": true, 00:05:18.491 "seek_data": 
false, 00:05:18.491 "seek_hole": false, 00:05:18.491 "unmap": true, 00:05:18.491 "write": true, 00:05:18.491 "write_zeroes": true, 00:05:18.491 "zcopy": true, 00:05:18.491 "zone_append": false, 00:05:18.491 "zone_management": false 00:05:18.491 }, 00:05:18.491 "uuid": "043e8d15-423f-4d33-9f7d-d905e74a72b1", 00:05:18.491 "zoned": false 00:05:18.491 } 00:05:18.491 ]' 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 [2024-07-12 00:24:23.366371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:18.491 [2024-07-12 00:24:23.366495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.491 [2024-07-12 00:24:23.366530] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:18.491 [2024-07-12 00:24:23.366545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.491 [2024-07-12 00:24:23.369503] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.491 [2024-07-12 00:24:23.369547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.491 Passthru0 00:05:18.491 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.492 { 00:05:18.492 "aliases": [ 00:05:18.492 "043e8d15-423f-4d33-9f7d-d905e74a72b1" 00:05:18.492 ], 00:05:18.492 "assigned_rate_limits": { 00:05:18.492 "r_mbytes_per_sec": 0, 00:05:18.492 "rw_ios_per_sec": 0, 00:05:18.492 "rw_mbytes_per_sec": 0, 00:05:18.492 "w_mbytes_per_sec": 0 00:05:18.492 }, 00:05:18.492 "block_size": 512, 00:05:18.492 "claim_type": "exclusive_write", 00:05:18.492 "claimed": true, 00:05:18.492 "driver_specific": {}, 00:05:18.492 "memory_domains": [ 00:05:18.492 { 00:05:18.492 "dma_device_id": "system", 00:05:18.492 "dma_device_type": 1 00:05:18.492 }, 00:05:18.492 { 00:05:18.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.492 "dma_device_type": 2 00:05:18.492 } 00:05:18.492 ], 00:05:18.492 "name": "Malloc3", 00:05:18.492 "num_blocks": 16384, 00:05:18.492 "product_name": "Malloc disk", 00:05:18.492 "supported_io_types": { 00:05:18.492 "abort": true, 00:05:18.492 "compare": false, 00:05:18.492 "compare_and_write": false, 00:05:18.492 "copy": true, 00:05:18.492 "flush": true, 00:05:18.492 "get_zone_info": false, 00:05:18.492 "nvme_admin": false, 00:05:18.492 "nvme_io": false, 00:05:18.492 "nvme_io_md": false, 00:05:18.492 "nvme_iov_md": false, 00:05:18.492 "read": true, 00:05:18.492 "reset": true, 00:05:18.492 "seek_data": false, 00:05:18.492 "seek_hole": false, 00:05:18.492 "unmap": true, 00:05:18.492 "write": true, 00:05:18.492 
"write_zeroes": true, 00:05:18.492 "zcopy": true, 00:05:18.492 "zone_append": false, 00:05:18.492 "zone_management": false 00:05:18.492 }, 00:05:18.492 "uuid": "043e8d15-423f-4d33-9f7d-d905e74a72b1", 00:05:18.492 "zoned": false 00:05:18.492 }, 00:05:18.492 { 00:05:18.492 "aliases": [ 00:05:18.492 "845ae218-329d-57e9-8fcb-0a903bf1bfdd" 00:05:18.492 ], 00:05:18.492 "assigned_rate_limits": { 00:05:18.492 "r_mbytes_per_sec": 0, 00:05:18.492 "rw_ios_per_sec": 0, 00:05:18.492 "rw_mbytes_per_sec": 0, 00:05:18.492 "w_mbytes_per_sec": 0 00:05:18.492 }, 00:05:18.492 "block_size": 512, 00:05:18.492 "claimed": false, 00:05:18.492 "driver_specific": { 00:05:18.492 "passthru": { 00:05:18.492 "base_bdev_name": "Malloc3", 00:05:18.492 "name": "Passthru0" 00:05:18.492 } 00:05:18.492 }, 00:05:18.492 "memory_domains": [ 00:05:18.492 { 00:05:18.492 "dma_device_id": "system", 00:05:18.492 "dma_device_type": 1 00:05:18.492 }, 00:05:18.492 { 00:05:18.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.492 "dma_device_type": 2 00:05:18.492 } 00:05:18.492 ], 00:05:18.492 "name": "Passthru0", 00:05:18.492 "num_blocks": 16384, 00:05:18.492 "product_name": "passthru", 00:05:18.492 "supported_io_types": { 00:05:18.492 "abort": true, 00:05:18.492 "compare": false, 00:05:18.492 "compare_and_write": false, 00:05:18.492 "copy": true, 00:05:18.492 "flush": true, 00:05:18.492 "get_zone_info": false, 00:05:18.492 "nvme_admin": false, 00:05:18.492 "nvme_io": false, 00:05:18.492 "nvme_io_md": false, 00:05:18.492 "nvme_iov_md": false, 00:05:18.492 "read": true, 00:05:18.492 "reset": true, 00:05:18.492 "seek_data": false, 00:05:18.492 "seek_hole": false, 00:05:18.492 "unmap": true, 00:05:18.492 "write": true, 00:05:18.492 "write_zeroes": true, 00:05:18.492 "zcopy": true, 00:05:18.492 "zone_append": false, 00:05:18.492 "zone_management": false 00:05:18.492 }, 00:05:18.492 "uuid": "845ae218-329d-57e9-8fcb-0a903bf1bfdd", 00:05:18.492 "zoned": false 00:05:18.492 } 00:05:18.492 ]' 00:05:18.492 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.751 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.751 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.751 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.751 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 
00:05:18.752 00:05:18.752 real 0m0.343s 00:05:18.752 user 0m0.215s 00:05:18.752 sys 0m0.032s 00:05:18.752 ************************************ 00:05:18.752 END TEST rpc_daemon_integrity 00:05:18.752 ************************************ 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.752 00:24:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.752 00:24:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.752 00:24:23 rpc -- rpc/rpc.sh@84 -- # killprocess 61407 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 61407 ']' 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@952 -- # kill -0 61407 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@953 -- # uname 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61407 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.752 killing process with pid 61407 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61407' 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@967 -- # kill 61407 00:05:18.752 00:24:23 rpc -- common/autotest_common.sh@972 -- # wait 61407 00:05:21.284 00:05:21.284 real 0m5.502s 00:05:21.284 user 0m6.341s 00:05:21.284 sys 0m0.950s 00:05:21.284 00:24:25 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.284 00:24:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.284 ************************************ 00:05:21.284 END TEST rpc 00:05:21.284 ************************************ 00:05:21.284 00:24:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.284 00:24:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.284 00:24:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.284 00:24:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.284 00:24:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.284 ************************************ 00:05:21.284 START TEST skip_rpc 00:05:21.284 ************************************ 00:05:21.284 00:24:25 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.284 * Looking for test storage... 
00:05:21.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.284 00:24:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.284 00:24:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.284 00:24:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.284 00:24:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.284 00:24:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.284 00:24:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.284 ************************************ 00:05:21.284 START TEST skip_rpc 00:05:21.284 ************************************ 00:05:21.284 00:24:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:21.284 00:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61692 00:05:21.284 00:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.284 00:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.284 00:24:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.542 [2024-07-12 00:24:26.230593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:21.542 [2024-07-12 00:24:26.230789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61692 ] 00:05:21.542 [2024-07-12 00:24:26.409174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.800 [2024-07-12 00:24:26.672243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.066 2024/07/12 00:24:31 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.066 00:24:31 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61692 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 61692 ']' 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 61692 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61692 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.066 killing process with pid 61692 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61692' 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 61692 00:05:27.066 00:24:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 61692 00:05:28.964 00:05:28.964 real 0m7.375s 00:05:28.964 user 0m6.767s 00:05:28.964 sys 0m0.489s 00:05:28.964 00:24:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.964 00:24:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.964 ************************************ 00:05:28.964 END TEST skip_rpc 00:05:28.964 ************************************ 00:05:28.964 00:24:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:28.964 00:24:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.964 00:24:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.964 00:24:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.964 00:24:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.964 ************************************ 00:05:28.964 START TEST skip_rpc_with_json 00:05:28.964 ************************************ 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61808 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61808 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61808 ']' 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
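skip_rpc_with_json captures the running target's configuration with save_config (the JSON dump that follows, with its subsystems array of sock, bdev, nvmf, and so on) so a later target can boot directly from the file. In C, the consumer of such a file is the json_config_file field of spdk_app_opts; a sketch, with the config path taken from the CONFIG_PATH set earlier in this suite and the app name invented:

```c
#include "spdk/event.h"

static void
app_started(void *arg)
{
	(void)arg;
	/* Subsystems restored from the JSON config are live here. */
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "json_config_demo"; /* placeholder */
	opts.json_config_file = "/home/vagrant/spdk_repo/spdk/test/rpc/config.json";

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
```

The nvmf_get_transports failure just below is the expected first half of the test: the transport does not exist yet, so the error path is exercised before nvmf_create_transport makes the subsequent save_config meaningful.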
00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.964 00:24:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.964 [2024-07-12 00:24:33.641216] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:28.964 [2024-07-12 00:24:33.641471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:05:28.964 [2024-07-12 00:24:33.817593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.222 [2024-07-12 00:24:34.098593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.157 [2024-07-12 00:24:34.942322] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:30.157 2024/07/12 00:24:34 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:30.157 request: 00:05:30.157 { 00:05:30.157 "method": "nvmf_get_transports", 00:05:30.157 "params": { 00:05:30.157 "trtype": "tcp" 00:05:30.157 } 00:05:30.157 } 00:05:30.157 Got JSON-RPC error response 00:05:30.157 GoRPCClient: error on JSON-RPC call 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.157 [2024-07-12 00:24:34.954446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.157 00:24:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.415 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.415 00:24:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.415 { 00:05:30.415 "subsystems": [ 00:05:30.415 { 00:05:30.415 "subsystem": "vfio_user_target", 00:05:30.415 "config": null 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "keyring", 00:05:30.415 "config": [] 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "iobuf", 00:05:30.415 "config": [ 00:05:30.415 { 00:05:30.415 "method": "iobuf_set_options", 00:05:30.415 "params": { 00:05:30.415 "large_bufsize": 135168, 00:05:30.415 "large_pool_count": 1024, 
00:05:30.415 "small_bufsize": 8192, 00:05:30.415 "small_pool_count": 8192 00:05:30.415 } 00:05:30.415 } 00:05:30.415 ] 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "sock", 00:05:30.415 "config": [ 00:05:30.415 { 00:05:30.415 "method": "sock_set_default_impl", 00:05:30.415 "params": { 00:05:30.415 "impl_name": "posix" 00:05:30.415 } 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "method": "sock_impl_set_options", 00:05:30.415 "params": { 00:05:30.415 "enable_ktls": false, 00:05:30.415 "enable_placement_id": 0, 00:05:30.415 "enable_quickack": false, 00:05:30.415 "enable_recv_pipe": true, 00:05:30.415 "enable_zerocopy_send_client": false, 00:05:30.415 "enable_zerocopy_send_server": true, 00:05:30.415 "impl_name": "ssl", 00:05:30.415 "recv_buf_size": 4096, 00:05:30.415 "send_buf_size": 4096, 00:05:30.415 "tls_version": 0, 00:05:30.415 "zerocopy_threshold": 0 00:05:30.415 } 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "method": "sock_impl_set_options", 00:05:30.415 "params": { 00:05:30.415 "enable_ktls": false, 00:05:30.415 "enable_placement_id": 0, 00:05:30.415 "enable_quickack": false, 00:05:30.415 "enable_recv_pipe": true, 00:05:30.415 "enable_zerocopy_send_client": false, 00:05:30.415 "enable_zerocopy_send_server": true, 00:05:30.415 "impl_name": "posix", 00:05:30.415 "recv_buf_size": 2097152, 00:05:30.415 "send_buf_size": 2097152, 00:05:30.415 "tls_version": 0, 00:05:30.415 "zerocopy_threshold": 0 00:05:30.415 } 00:05:30.415 } 00:05:30.415 ] 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "vmd", 00:05:30.415 "config": [] 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "accel", 00:05:30.415 "config": [ 00:05:30.415 { 00:05:30.415 "method": "accel_set_options", 00:05:30.415 "params": { 00:05:30.415 "buf_count": 2048, 00:05:30.415 "large_cache_size": 16, 00:05:30.415 "sequence_count": 2048, 00:05:30.415 "small_cache_size": 128, 00:05:30.415 "task_count": 2048 00:05:30.415 } 00:05:30.415 } 00:05:30.415 ] 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "subsystem": "bdev", 00:05:30.415 "config": [ 00:05:30.415 { 00:05:30.415 "method": "bdev_set_options", 00:05:30.415 "params": { 00:05:30.415 "bdev_auto_examine": true, 00:05:30.415 "bdev_io_cache_size": 256, 00:05:30.415 "bdev_io_pool_size": 65535, 00:05:30.415 "iobuf_large_cache_size": 16, 00:05:30.415 "iobuf_small_cache_size": 128 00:05:30.415 } 00:05:30.415 }, 00:05:30.415 { 00:05:30.415 "method": "bdev_raid_set_options", 00:05:30.415 "params": { 00:05:30.415 "process_window_size_kb": 1024 00:05:30.415 } 00:05:30.415 }, 00:05:30.415 { 00:05:30.416 "method": "bdev_iscsi_set_options", 00:05:30.416 "params": { 00:05:30.416 "timeout_sec": 30 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "bdev_nvme_set_options", 00:05:30.416 "params": { 00:05:30.416 "action_on_timeout": "none", 00:05:30.416 "allow_accel_sequence": false, 00:05:30.416 "arbitration_burst": 0, 00:05:30.416 "bdev_retry_count": 3, 00:05:30.416 "ctrlr_loss_timeout_sec": 0, 00:05:30.416 "delay_cmd_submit": true, 00:05:30.416 "dhchap_dhgroups": [ 00:05:30.416 "null", 00:05:30.416 "ffdhe2048", 00:05:30.416 "ffdhe3072", 00:05:30.416 "ffdhe4096", 00:05:30.416 "ffdhe6144", 00:05:30.416 "ffdhe8192" 00:05:30.416 ], 00:05:30.416 "dhchap_digests": [ 00:05:30.416 "sha256", 00:05:30.416 "sha384", 00:05:30.416 "sha512" 00:05:30.416 ], 00:05:30.416 "disable_auto_failback": false, 00:05:30.416 "fast_io_fail_timeout_sec": 0, 00:05:30.416 "generate_uuids": false, 00:05:30.416 "high_priority_weight": 0, 00:05:30.416 "io_path_stat": false, 00:05:30.416 
"io_queue_requests": 0, 00:05:30.416 "keep_alive_timeout_ms": 10000, 00:05:30.416 "low_priority_weight": 0, 00:05:30.416 "medium_priority_weight": 0, 00:05:30.416 "nvme_adminq_poll_period_us": 10000, 00:05:30.416 "nvme_error_stat": false, 00:05:30.416 "nvme_ioq_poll_period_us": 0, 00:05:30.416 "rdma_cm_event_timeout_ms": 0, 00:05:30.416 "rdma_max_cq_size": 0, 00:05:30.416 "rdma_srq_size": 0, 00:05:30.416 "reconnect_delay_sec": 0, 00:05:30.416 "timeout_admin_us": 0, 00:05:30.416 "timeout_us": 0, 00:05:30.416 "transport_ack_timeout": 0, 00:05:30.416 "transport_retry_count": 4, 00:05:30.416 "transport_tos": 0 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "bdev_nvme_set_hotplug", 00:05:30.416 "params": { 00:05:30.416 "enable": false, 00:05:30.416 "period_us": 100000 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "bdev_wait_for_examine" 00:05:30.416 } 00:05:30.416 ] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "scsi", 00:05:30.416 "config": null 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "scheduler", 00:05:30.416 "config": [ 00:05:30.416 { 00:05:30.416 "method": "framework_set_scheduler", 00:05:30.416 "params": { 00:05:30.416 "name": "static" 00:05:30.416 } 00:05:30.416 } 00:05:30.416 ] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "vhost_scsi", 00:05:30.416 "config": [] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "vhost_blk", 00:05:30.416 "config": [] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "ublk", 00:05:30.416 "config": [] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "nbd", 00:05:30.416 "config": [] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "nvmf", 00:05:30.416 "config": [ 00:05:30.416 { 00:05:30.416 "method": "nvmf_set_config", 00:05:30.416 "params": { 00:05:30.416 "admin_cmd_passthru": { 00:05:30.416 "identify_ctrlr": false 00:05:30.416 }, 00:05:30.416 "discovery_filter": "match_any" 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "nvmf_set_max_subsystems", 00:05:30.416 "params": { 00:05:30.416 "max_subsystems": 1024 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "nvmf_set_crdt", 00:05:30.416 "params": { 00:05:30.416 "crdt1": 0, 00:05:30.416 "crdt2": 0, 00:05:30.416 "crdt3": 0 00:05:30.416 } 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "method": "nvmf_create_transport", 00:05:30.416 "params": { 00:05:30.416 "abort_timeout_sec": 1, 00:05:30.416 "ack_timeout": 0, 00:05:30.416 "buf_cache_size": 4294967295, 00:05:30.416 "c2h_success": true, 00:05:30.416 "data_wr_pool_size": 0, 00:05:30.416 "dif_insert_or_strip": false, 00:05:30.416 "in_capsule_data_size": 4096, 00:05:30.416 "io_unit_size": 131072, 00:05:30.416 "max_aq_depth": 128, 00:05:30.416 "max_io_qpairs_per_ctrlr": 127, 00:05:30.416 "max_io_size": 131072, 00:05:30.416 "max_queue_depth": 128, 00:05:30.416 "num_shared_buffers": 511, 00:05:30.416 "sock_priority": 0, 00:05:30.416 "trtype": "TCP", 00:05:30.416 "zcopy": false 00:05:30.416 } 00:05:30.416 } 00:05:30.416 ] 00:05:30.416 }, 00:05:30.416 { 00:05:30.416 "subsystem": "iscsi", 00:05:30.416 "config": [ 00:05:30.416 { 00:05:30.416 "method": "iscsi_set_options", 00:05:30.416 "params": { 00:05:30.416 "allow_duplicated_isid": false, 00:05:30.416 "chap_group": 0, 00:05:30.416 "data_out_pool_size": 2048, 00:05:30.416 "default_time2retain": 20, 00:05:30.416 "default_time2wait": 2, 00:05:30.416 "disable_chap": false, 00:05:30.416 "error_recovery_level": 0, 00:05:30.416 "first_burst_length": 8192, 00:05:30.416 
"immediate_data": true, 00:05:30.416 "immediate_data_pool_size": 16384, 00:05:30.416 "max_connections_per_session": 2, 00:05:30.416 "max_large_datain_per_connection": 64, 00:05:30.416 "max_queue_depth": 64, 00:05:30.416 "max_r2t_per_connection": 4, 00:05:30.416 "max_sessions": 128, 00:05:30.416 "mutual_chap": false, 00:05:30.416 "node_base": "iqn.2016-06.io.spdk", 00:05:30.416 "nop_in_interval": 30, 00:05:30.416 "nop_timeout": 60, 00:05:30.416 "pdu_pool_size": 36864, 00:05:30.416 "require_chap": false 00:05:30.416 } 00:05:30.416 } 00:05:30.416 ] 00:05:30.416 } 00:05:30.416 ] 00:05:30.416 } 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61808 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61808 ']' 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61808 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61808 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.416 killing process with pid 61808 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61808' 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61808 00:05:30.416 00:24:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61808 00:05:32.940 00:24:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61876 00:05:32.940 00:24:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.940 00:24:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61876 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61876 ']' 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61876 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61876 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.206 killing process with pid 61876 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61876' 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61876 00:05:38.206 00:24:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61876 00:05:40.109 00:24:44 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.109 00:05:40.109 real 0m11.384s 00:05:40.109 user 0m10.732s 00:05:40.109 sys 0m1.088s 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.109 ************************************ 00:05:40.109 END TEST skip_rpc_with_json 00:05:40.109 ************************************ 00:05:40.109 00:24:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.109 00:24:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:40.109 00:24:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.109 00:24:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.109 00:24:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.109 ************************************ 00:05:40.109 START TEST skip_rpc_with_delay 00:05:40.109 ************************************ 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.109 00:24:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.368 [2024-07-12 00:24:45.076179] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
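Annotation: the *ERROR* just logged is the pass condition for skip_rpc_with_delay. --wait-for-rpc asks the app to pause initialization until an RPC arrives, which can never happen once --no-rpc-server has disabled the listener, so startup must abort. The failing invocation, exactly as traced above:

  # Expected to exit non-zero with the app.c:831 error captured above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc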
00:05:40.369 [2024-07-12 00:24:45.076389] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.369 00:05:40.369 real 0m0.200s 00:05:40.369 user 0m0.113s 00:05:40.369 sys 0m0.085s 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.369 ************************************ 00:05:40.369 END TEST skip_rpc_with_delay 00:05:40.369 ************************************ 00:05:40.369 00:24:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:40.369 00:24:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.369 00:24:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.369 00:24:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.369 00:24:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.369 00:24:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.369 00:24:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.369 00:24:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.369 ************************************ 00:05:40.369 START TEST exit_on_failed_rpc_init 00:05:40.369 ************************************ 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62009 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62009 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62009 ']' 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.369 00:24:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.627 [2024-07-12 00:24:45.341420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:40.627 [2024-07-12 00:24:45.341631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62009 ] 00:05:40.627 [2024-07-12 00:24:45.518798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.193 [2024-07-12 00:24:45.830729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:41.758 00:24:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:42.015 [2024-07-12 00:24:46.813118] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:42.015 [2024-07-12 00:24:46.813989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:05:42.271 [2024-07-12 00:24:46.996243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.529 [2024-07-12 00:24:47.302203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.529 [2024-07-12 00:24:47.302352] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:42.529 [2024-07-12 00:24:47.302393] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:42.529 [2024-07-12 00:24:47.302448] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62009 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62009 ']' 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62009 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62009 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62009' 00:05:43.095 killing process with pid 62009 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62009 00:05:43.095 00:24:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62009 00:05:45.624 00:05:45.624 real 0m4.946s 00:05:45.624 user 0m5.654s 00:05:45.624 sys 0m0.768s 00:05:45.624 ************************************ 00:05:45.624 END TEST exit_on_failed_rpc_init 00:05:45.624 ************************************ 00:05:45.624 00:24:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.624 00:24:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.624 00:24:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.624 00:24:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.624 00:05:45.624 real 0m24.209s 00:05:45.624 user 0m23.364s 00:05:45.624 sys 0m2.629s 00:05:45.624 00:24:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.624 00:24:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.624 ************************************ 00:05:45.624 END TEST skip_rpc 00:05:45.624 ************************************ 00:05:45.624 00:24:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.624 00:24:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.624 00:24:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.624 
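Annotation: exit_on_failed_rpc_init, which finished just above, hinges on the "socket path ... in use" error: a second target cannot initialize its RPC server while the first still owns /var/tmp/spdk.sock, so spdk_app_stop exits non-zero. A condensed sketch under that reading (core masks as traced; waitforlisten and killprocess are the repo's common helpers):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # claims /var/tmp/spdk.sock
  spdk_pid=$!
  waitforlisten "$spdk_pid"
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
      exit 1    # second target must fail RPC listen and stop with a non-zero code
  fi
  killprocess "$spdk_pid"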
00:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.624 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.624 ************************************ 00:05:45.624 START TEST rpc_client 00:05:45.624 ************************************ 00:05:45.624 00:24:50 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.624 * Looking for test storage... 00:05:45.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.624 00:24:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.624 OK 00:05:45.624 00:24:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.624 00:05:45.624 real 0m0.156s 00:05:45.624 user 0m0.077s 00:05:45.624 sys 0m0.086s 00:05:45.624 00:24:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.624 ************************************ 00:05:45.624 END TEST rpc_client 00:05:45.624 ************************************ 00:05:45.624 00:24:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.624 00:24:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.624 00:24:50 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.624 00:24:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.624 00:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.624 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.624 ************************************ 00:05:45.624 START TEST json_config 00:05:45.624 ************************************ 00:05:45.624 00:24:50 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.624 00:24:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.624 00:24:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.624 00:24:50 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.624 00:24:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.624 00:24:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.624 00:24:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.624 00:24:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.625 00:24:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.625 00:24:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.625 00:24:50 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.625 00:24:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@47 -- # : 0 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.625 00:24:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.625 INFO: JSON configuration test init 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:45.625 00:24:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.625 00:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.625 00:24:50 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:45.625 00:24:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.625 00:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.884 00:24:50 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.884 00:24:50 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.884 00:24:50 json_config -- json_config/common.sh@10 -- # shift 00:05:45.884 00:24:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.884 00:24:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.884 00:24:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.884 00:24:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.884 00:24:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.884 00:24:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=62193 00:05:45.884 Waiting for target to run... 00:05:45.884 00:24:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
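Annotation: every tgt_rpc call traced below resolves to the same thing: scripts/rpc.py pointed at the non-default socket this json_config target listens on. A rough equivalent of json_config/common.sh's wrapper, as seen in the traced commands:

  tgt_rpc() {
      # Talk to the target on its dedicated socket instead of /var/tmp/spdk.sock.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
  }
  tgt_rpc load_config    # the first call visible after the target comes up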
00:05:45.884 00:24:50 json_config -- json_config/common.sh@25 -- # waitforlisten 62193 /var/tmp/spdk_tgt.sock 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 62193 ']' 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.884 00:24:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.884 00:24:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.884 [2024-07-12 00:24:50.705154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:45.884 [2024-07-12 00:24:50.705405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62193 ] 00:05:46.452 [2024-07-12 00:24:51.292270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.712 [2024-07-12 00:24:51.557171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.712 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:46.712 00:24:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.712 00:24:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:46.712 00:24:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.712 00:24:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:46.712 00:24:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.712 00:24:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.970 00:24:51 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.970 00:24:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:46.970 00:24:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:47.906 00:24:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.906 00:24:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:47.906 00:24:52 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:47.906 00:24:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:48.165 00:24:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.165 00:24:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:48.165 00:24:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.165 00:24:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:48.165 00:24:53 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.165 00:24:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.731 MallocForNvmf0 00:05:48.731 00:24:53 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.731 00:24:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.731 MallocForNvmf1 00:05:48.731 00:24:53 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.731 00:24:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.990 [2024-07-12 00:24:53.882606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.990 00:24:53 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.990 00:24:53 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.247 00:24:54 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.247 00:24:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.505 00:24:54 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.505 00:24:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.763 00:24:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.763 00:24:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.021 [2024-07-12 00:24:54.935347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.279 00:24:54 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:50.279 00:24:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.279 00:24:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.279 00:24:54 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:50.279 00:24:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.279 00:24:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.279 00:24:55 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:50.279 00:24:55 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.279 00:24:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.537 MallocBdevForConfigChangeCheck 00:05:50.537 00:24:55 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:50.537 00:24:55 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.537 00:24:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 00:24:55 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:50.537 00:24:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.104 INFO: shutting down applications... 00:05:51.104 00:24:55 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
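Annotation: the nvmf subsystem assembled above is fully scripted; the same RPCs, copied from the trace (and assuming the tgt_rpc wrapper sketched earlier), rebuild it by hand:

  tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
  tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420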
00:05:51.104 00:24:55 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:51.104 00:24:55 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:51.104 00:24:55 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:51.104 00:24:55 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:51.362 Calling clear_iscsi_subsystem 00:05:51.362 Calling clear_nvmf_subsystem 00:05:51.362 Calling clear_nbd_subsystem 00:05:51.362 Calling clear_ublk_subsystem 00:05:51.362 Calling clear_vhost_blk_subsystem 00:05:51.362 Calling clear_vhost_scsi_subsystem 00:05:51.362 Calling clear_bdev_subsystem 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:51.362 00:24:56 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.928 00:24:56 json_config -- json_config/json_config.sh@345 -- # break 00:05:51.928 00:24:56 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:51.928 00:24:56 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:51.928 00:24:56 json_config -- json_config/common.sh@31 -- # local app=target 00:05:51.928 00:24:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.928 00:24:56 json_config -- json_config/common.sh@35 -- # [[ -n 62193 ]] 00:05:51.928 00:24:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 62193 00:05:51.928 00:24:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.928 00:24:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.928 00:24:56 json_config -- json_config/common.sh@41 -- # kill -0 62193 00:05:51.928 00:24:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.186 00:24:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.186 00:24:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.186 00:24:57 json_config -- json_config/common.sh@41 -- # kill -0 62193 00:05:52.186 00:24:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.753 00:24:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.753 00:24:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.753 00:24:57 json_config -- json_config/common.sh@41 -- # kill -0 62193 00:05:52.753 00:24:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.753 00:24:57 json_config -- json_config/common.sh@43 -- # break 00:05:52.753 00:24:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.753 SPDK target shutdown done 00:05:52.753 00:24:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:52.753 INFO: relaunching applications... 
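Annotation: the shutdown traced above is a polite SIGINT followed by a bounded liveness poll, roughly what json_config/common.sh does per the trace (loop bound and sleep interval as traced; variable names abbreviated here):

  kill -SIGINT "$app_pid"                        # ask the target to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2> /dev/null || break   # process gone: shutdown done
      sleep 0.5
  done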
00:05:52.753 00:24:57 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:52.753 00:24:57 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.753 00:24:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:52.753 00:24:57 json_config -- json_config/common.sh@10 -- # shift 00:05:52.753 00:24:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.753 00:24:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.753 00:24:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.753 00:24:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.753 00:24:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.753 00:24:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=62486 00:05:52.753 Waiting for target to run... 00:05:52.753 00:24:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.753 00:24:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.753 00:24:57 json_config -- json_config/common.sh@25 -- # waitforlisten 62486 /var/tmp/spdk_tgt.sock 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 62486 ']' 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.753 00:24:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.012 [2024-07-12 00:24:57.741710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:53.012 [2024-07-12 00:24:57.741932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62486 ] 00:05:53.580 [2024-07-12 00:24:58.327467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.838 [2024-07-12 00:24:58.549438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.776 [2024-07-12 00:24:59.459659] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.776 [2024-07-12 00:24:59.491792] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.776 00:24:59 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.776 00:24:59 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:54.776 00:05:54.776 00:24:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.776 00:24:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:54.776 00:24:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
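Annotation: the "same configuration" check about to run is a normalized diff: both the live config and the on-disk spdk_tgt_config.json are passed through config_filter.py's sort method, and identical output means PASS. In outline only; the exact temp-file plumbing lives in test/json_config/json_diff.sh, and the redirections below are an assumption:

  tgt_rpc save_config > /tmp/live.json                      # live target state
  config_filter.py -method sort < /tmp/live.json > /tmp/a   # canonical key order
  config_filter.py -method sort < spdk_tgt_config.json > /tmp/b
  diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'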
00:05:54.776 INFO: Checking if target configuration is the same... 00:05:54.776 00:24:59 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.776 00:24:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:54.776 00:24:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.776 + '[' 2 -ne 2 ']' 00:05:54.776 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:54.776 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:54.776 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:54.776 +++ basename /dev/fd/62 00:05:54.776 ++ mktemp /tmp/62.XXX 00:05:54.776 + tmp_file_1=/tmp/62.Lkt 00:05:54.776 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.776 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.776 + tmp_file_2=/tmp/spdk_tgt_config.json.Ux6 00:05:54.776 + ret=0 00:05:54.776 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.034 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.293 + diff -u /tmp/62.Lkt /tmp/spdk_tgt_config.json.Ux6 00:05:55.293 INFO: JSON config files are the same 00:05:55.293 + echo 'INFO: JSON config files are the same' 00:05:55.293 + rm /tmp/62.Lkt /tmp/spdk_tgt_config.json.Ux6 00:05:55.293 + exit 0 00:05:55.293 00:24:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:55.293 INFO: changing configuration and checking if this can be detected... 00:05:55.293 00:24:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.293 00:24:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.293 00:24:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.550 00:25:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:55.550 00:25:00 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.550 00:25:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.550 + '[' 2 -ne 2 ']' 00:05:55.550 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:55.550 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:55.550 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:55.550 +++ basename /dev/fd/62 00:05:55.550 ++ mktemp /tmp/62.XXX 00:05:55.550 + tmp_file_1=/tmp/62.aAN 00:05:55.550 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.550 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.550 + tmp_file_2=/tmp/spdk_tgt_config.json.Q35 00:05:55.550 + ret=0 00:05:55.550 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.808 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:56.065 + diff -u /tmp/62.aAN /tmp/spdk_tgt_config.json.Q35 00:05:56.065 + ret=1 00:05:56.065 + echo '=== Start of file: /tmp/62.aAN ===' 00:05:56.065 + cat /tmp/62.aAN 00:05:56.065 + echo '=== End of file: /tmp/62.aAN ===' 00:05:56.065 + echo '' 00:05:56.065 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q35 ===' 00:05:56.066 + cat /tmp/spdk_tgt_config.json.Q35 00:05:56.066 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q35 ===' 00:05:56.066 + echo '' 00:05:56.066 + rm /tmp/62.aAN /tmp/spdk_tgt_config.json.Q35 00:05:56.066 + exit 1 00:05:56.066 INFO: configuration change detected. 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 62486 ]] 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 00:25:00 json_config -- json_config/json_config.sh@323 -- # killprocess 62486 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@948 -- # '[' -z 62486 ']' 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@952 -- # kill -0 62486 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@953 -- # uname 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62486 00:05:56.066 
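What json_diff.sh did in the two runs above reduces to: dump the live configuration over RPC, normalize both sides with config_filter.py so key ordering cannot produce a spurious diff, and compare. The first run matches (exit 0); after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, the second run differs (exit 1). A condensed sketch, not the verbatim script:

    # compare the running target's config against the saved spdk_tgt_config.json
    live=$(mktemp) saved=$(mktemp)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > "$live"
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
        < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # json_diff.sh exits 1 here
    fi
    rm "$live" "$saved"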
00:25:00 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.066 killing process with pid 62486 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62486' 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@967 -- # kill 62486 00:05:56.066 00:25:00 json_config -- common/autotest_common.sh@972 -- # wait 62486 00:05:57.013 00:25:01 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.013 00:25:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:57.013 00:25:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.013 00:25:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.013 00:25:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:57.013 INFO: Success 00:05:57.013 00:25:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:57.013 00:05:57.013 real 0m11.476s 00:05:57.013 user 0m15.175s 00:05:57.013 sys 0m2.610s 00:05:57.013 00:25:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.013 00:25:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.013 ************************************ 00:05:57.013 END TEST json_config 00:05:57.013 ************************************ 00:05:57.272 00:25:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.272 00:25:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.272 00:25:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.272 00:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.272 00:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.272 ************************************ 00:05:57.272 START TEST json_config_extra_key 00:05:57.272 ************************************ 00:05:57.272 00:25:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.272 00:25:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.272 00:25:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.272 00:25:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.272 00:25:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.272 00:25:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.272 00:25:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.272 00:25:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.272 00:25:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.272 00:25:02 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.272 00:25:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.272 INFO: launching applications... 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.272 00:25:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62688 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.272 Waiting for target to run... 
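The declares traced above show how json_config/common.sh keys every per-app detail on the app name, so "target" (and, in configurations that also run an initiator, that second app) share one code path. The pattern, reassembled from the trace:

    # per-app bookkeeping, reassembled from the declares traced above
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!   # 62688 in this run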
00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62688 /var/tmp/spdk_tgt.sock 00:05:57.272 00:25:02 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62688 ']' 00:05:57.272 00:25:02 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.272 00:25:02 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.272 00:25:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.272 00:25:02 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.272 00:25:02 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.273 00:25:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.530 [2024-07-12 00:25:02.216648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:57.530 [2024-07-12 00:25:02.217486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62688 ] 00:05:58.096 [2024-07-12 00:25:02.788834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.353 [2024-07-12 00:25:03.057899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.918 00:25:03 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.918 00:05:58.918 00:25:03 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.918 INFO: shutting down applications... 00:05:58.918 00:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
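The ERR trap installed near the top of this suite (trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR, a few lines up in the trace) is what turns any failed command into a labeled abort. The handler body never appears in this log, so the following is only an illustrative guess at the shape such a handler takes; the real on_error_exit may clean up differently:

    # hypothetical handler shape; the real on_error_exit is not visible in this log
    on_error_exit() {
        local func=$1 line=$2
        echo "ERROR: ${func}:${line}" >&2
        [[ -n ${app_pid[target]} ]] && kill "${app_pid[target]}" 2> /dev/null  # assumed cleanup
        exit 1
    }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR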
00:05:58.918 00:25:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62688 ]] 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62688 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:05:58.918 00:25:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.485 00:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.485 00:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.485 00:25:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:05:59.485 00:25:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.053 00:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.053 00:25:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.053 00:25:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:06:00.053 00:25:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.312 00:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.312 00:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.312 00:25:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:06:00.312 00:25:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.878 00:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.878 00:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.878 00:25:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:06:00.878 00:25:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.446 00:25:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.446 00:25:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.446 00:25:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:06:01.446 00:25:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.011 00:25:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62688 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.012 00:25:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.012 SPDK target shutdown done 00:06:02.012 Success 00:06:02.012 00:25:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:02.012 00:06:02.012 real 0m4.750s 00:06:02.012 user 0m4.261s 00:06:02.012 sys 0m0.762s 00:06:02.012 
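json_config_test_shutdown_app, traced above, is a plain SIGINT-then-poll loop: send SIGINT once, then test the pid with kill -0 every half second, giving the target up to 30 tries (15 seconds) to exit cleanly. Condensed from the trace:

    # graceful shutdown as traced above: SIGINT once, then poll liveness
    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2> /dev/null || break   # kill -0 only checks that the pid is still alive
        sleep 0.5
    done
    app_pid[target]=
    echo 'SPDK target shutdown done'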
00:25:06 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.012 00:25:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.012 ************************************ 00:06:02.012 END TEST json_config_extra_key 00:06:02.012 ************************************ 00:06:02.012 00:25:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.012 00:25:06 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.012 00:25:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.012 00:25:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.012 00:25:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.012 ************************************ 00:06:02.012 START TEST alias_rpc 00:06:02.012 ************************************ 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.012 * Looking for test storage... 00:06:02.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:02.012 00:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.012 00:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62798 00:06:02.012 00:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62798 00:06:02.012 00:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62798 ']' 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.012 00:25:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.270 [2024-07-12 00:25:07.008278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:02.270 [2024-07-12 00:25:07.008506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:06:02.270 [2024-07-12 00:25:07.186690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.834 [2024-07-12 00:25:07.493363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.764 00:25:08 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.764 00:25:08 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.764 00:25:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:04.022 00:25:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62798 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62798 ']' 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62798 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62798 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62798' 00:06:04.022 killing process with pid 62798 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@967 -- # kill 62798 00:06:04.022 00:25:08 alias_rpc -- common/autotest_common.sh@972 -- # wait 62798 00:06:06.548 00:06:06.548 real 0m4.433s 00:06:06.548 user 0m4.480s 00:06:06.548 sys 0m0.718s 00:06:06.548 00:25:11 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.548 ************************************ 00:06:06.548 END TEST alias_rpc 00:06:06.548 ************************************ 00:06:06.548 00:25:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.548 00:25:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.548 00:25:11 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:06:06.548 00:25:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.548 00:25:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.548 00:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.548 00:25:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.548 ************************************ 00:06:06.548 START TEST dpdk_mem_utility 00:06:06.548 ************************************ 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.548 * Looking for test storage... 
00:06:06.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:06.548 00:25:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:06.548 00:25:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62924 00:06:06.548 00:25:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.548 00:25:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62924 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62924 ']' 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.548 00:25:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.806 [2024-07-12 00:25:11.510623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:06.806 [2024-07-12 00:25:11.510830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62924 ] 00:06:06.806 [2024-07-12 00:25:11.690716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.064 [2024-07-12 00:25:11.997020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.005 00:25:12 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.005 00:25:12 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:08.005 00:25:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.005 00:25:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.005 00:25:12 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.005 00:25:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.005 { 00:06:08.005 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.005 } 00:06:08.005 00:25:12 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.005 00:25:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.275 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:08.275 1 heaps totaling size 820.000000 MiB 00:06:08.275 size: 820.000000 MiB heap id: 0 00:06:08.275 end heaps---------- 00:06:08.275 8 mempools totaling size 598.116089 MiB 00:06:08.275 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.275 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.275 size: 84.521057 MiB name: bdev_io_62924 00:06:08.275 size: 51.011292 MiB name: evtpool_62924 00:06:08.275 size: 50.003479 MiB name: msgpool_62924 00:06:08.275 size: 21.763794 MiB name: PDU_Pool 00:06:08.275 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:08.276 size: 0.026123 MiB name: Session_Pool 00:06:08.276 end mempools------- 00:06:08.276 6 memzones totaling size 4.142822 MiB 00:06:08.276 size: 1.000366 MiB name: RG_ring_0_62924 00:06:08.276 size: 1.000366 MiB name: RG_ring_1_62924 00:06:08.276 size: 1.000366 MiB name: RG_ring_4_62924 00:06:08.276 size: 1.000366 MiB name: RG_ring_5_62924 00:06:08.276 size: 0.125366 MiB name: RG_ring_2_62924 00:06:08.276 size: 0.015991 MiB name: RG_ring_3_62924 00:06:08.276 end memzones------- 00:06:08.276 00:25:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.276 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:06:08.276 list of free elements. size: 18.469482 MiB 00:06:08.276 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:08.276 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:08.276 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:08.276 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:08.276 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:08.276 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:08.276 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:08.276 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:08.276 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:08.276 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:08.276 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:08.276 element at address: 0x200000200000 with size: 0.834351 MiB 00:06:08.276 element at address: 0x20001b000000 with size: 0.568542 MiB 00:06:08.276 element at address: 0x200019200000 with size: 0.488708 MiB 00:06:08.276 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:08.276 element at address: 0x200013800000 with size: 0.468872 MiB 00:06:08.276 element at address: 0x200028400000 with size: 0.392639 MiB 00:06:08.276 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:08.276 list of standard malloc elements. 
size: 199.266113 MiB 00:06:08.276 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:08.276 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:08.276 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:08.276 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:08.276 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:08.276 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:08.276 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:08.276 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:08.276 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:06:08.276 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:06:08.276 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:08.276 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff180 with size: 0.000244 MiB 
00:06:08.276 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:08.276 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:08.276 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:08.277 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:08.277 element at 
address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094ec0 
with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:08.277 element at address: 0x200028464840 with size: 0.000244 MiB 00:06:08.277 element at address: 0x200028464940 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846b600 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e080 with size: 0.000244 MiB 
00:06:08.277 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:08.277 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:08.278 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:08.278 list of memzone associated elements. 
size: 602.264404 MiB 00:06:08.278 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:08.278 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.278 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:08.278 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.278 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:08.278 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62924_0 00:06:08.278 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:08.278 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62924_0 00:06:08.278 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:08.278 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62924_0 00:06:08.278 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:08.278 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.278 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:08.278 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.278 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:08.278 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62924 00:06:08.278 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:08.278 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62924 00:06:08.278 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:08.278 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62924 00:06:08.278 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:08.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.278 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:08.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.278 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:08.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.278 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:08.278 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.278 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:08.278 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62924 00:06:08.278 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:08.278 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62924 00:06:08.278 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:08.278 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62924 00:06:08.278 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:08.278 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62924 00:06:08.278 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:08.278 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62924 00:06:08.278 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:08.278 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.278 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:08.278 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.278 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:08.278 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.278 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:08.278 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62924 00:06:08.278 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:08.278 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.278 element at address: 0x200028464a40 with size: 0.023804 MiB 00:06:08.278 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.278 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:08.278 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62924 00:06:08.278 element at address: 0x20002846abc0 with size: 0.002502 MiB 00:06:08.278 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.278 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:06:08.278 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62924 00:06:08.278 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:08.278 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62924 00:06:08.278 element at address: 0x20002846b700 with size: 0.000366 MiB 00:06:08.278 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.278 00:25:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.278 00:25:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62924 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62924 ']' 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62924 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62924 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.278 killing process with pid 62924 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62924' 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62924 00:06:08.278 00:25:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62924 00:06:10.810 00:06:10.810 real 0m4.265s 00:06:10.810 user 0m4.115s 00:06:10.810 sys 0m0.717s 00:06:10.810 00:25:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.810 ************************************ 00:06:10.810 END TEST dpdk_mem_utility 00:06:10.810 ************************************ 00:06:10.810 00:25:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.810 00:25:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.810 00:25:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.810 00:25:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.810 00:25:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.810 00:25:15 -- common/autotest_common.sh@10 -- # set +x 00:06:10.810 ************************************ 00:06:10.810 START TEST event 00:06:10.810 ************************************ 00:06:10.810 00:25:15 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.810 * Looking for test storage... 
00:06:10.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.810 00:25:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.810 00:25:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.810 00:25:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.810 00:25:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:10.810 00:25:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.810 00:25:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.810 ************************************ 00:06:10.810 START TEST event_perf 00:06:10.810 ************************************ 00:06:10.810 00:25:15 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.810 Running I/O for 1 seconds...[2024-07-12 00:25:15.742445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:10.810 [2024-07-12 00:25:15.742633] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63042 ] 00:06:11.082 [2024-07-12 00:25:15.917917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.349 [2024-07-12 00:25:16.169278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.349 [2024-07-12 00:25:16.169345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.349 [2024-07-12 00:25:16.169427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.349 [2024-07-12 00:25:16.169436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.724 Running I/O for 1 seconds... 00:06:12.724 lcore 0: 170883 00:06:12.724 lcore 1: 170882 00:06:12.724 lcore 2: 170881 00:06:12.724 lcore 3: 170881 00:06:12.724 done. 00:06:12.724 00:06:12.724 real 0m1.906s 00:06:12.724 user 0m4.616s 00:06:12.724 sys 0m0.154s 00:06:12.724 00:25:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.724 00:25:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.724 ************************************ 00:06:12.724 END TEST event_perf 00:06:12.724 ************************************ 00:06:12.724 00:25:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.724 00:25:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:12.724 00:25:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:12.724 00:25:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.724 00:25:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.724 ************************************ 00:06:12.724 START TEST event_reactor 00:06:12.724 ************************************ 00:06:12.724 00:25:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:12.983 [2024-07-12 00:25:17.690582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:12.983 [2024-07-12 00:25:17.690743] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63087 ] 00:06:12.983 [2024-07-12 00:25:17.850592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.241 [2024-07-12 00:25:18.099851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.612 test_start 00:06:14.612 oneshot 00:06:14.612 tick 100 00:06:14.612 tick 100 00:06:14.612 tick 250 00:06:14.612 tick 100 00:06:14.612 tick 100 00:06:14.612 tick 100 00:06:14.612 tick 250 00:06:14.612 tick 500 00:06:14.612 tick 100 00:06:14.612 tick 100 00:06:14.612 tick 250 00:06:14.612 tick 100 00:06:14.612 tick 100 00:06:14.612 test_end 00:06:14.612 00:06:14.612 real 0m1.858s 00:06:14.612 user 0m1.638s 00:06:14.612 sys 0m0.109s 00:06:14.612 00:25:19 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.612 00:25:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:14.612 ************************************ 00:06:14.612 END TEST event_reactor 00:06:14.612 ************************************ 00:06:14.612 00:25:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.612 00:25:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.612 00:25:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:14.612 00:25:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.612 00:25:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.870 ************************************ 00:06:14.870 START TEST event_reactor_perf 00:06:14.870 ************************************ 00:06:14.870 00:25:19 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.870 [2024-07-12 00:25:19.585519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:14.870 [2024-07-12 00:25:19.585698] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63129 ] 00:06:14.870 [2024-07-12 00:25:19.748078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.435 [2024-07-12 00:25:20.088494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.808 test_start 00:06:16.808 test_end 00:06:16.808 Performance: 270483 events per second 00:06:16.808 00:06:16.808 real 0m1.955s 00:06:16.808 user 0m1.739s 00:06:16.808 sys 0m0.103s 00:06:16.808 00:25:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.808 00:25:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.808 ************************************ 00:06:16.808 END TEST event_reactor_perf 00:06:16.808 ************************************ 00:06:16.808 00:25:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.808 00:25:21 event -- event/event.sh@49 -- # uname -s 00:06:16.808 00:25:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:16.808 00:25:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.808 00:25:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.808 00:25:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.808 00:25:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.808 ************************************ 00:06:16.808 START TEST event_scheduler 00:06:16.808 ************************************ 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.808 * Looking for test storage... 00:06:16.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:16.808 00:25:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:16.808 00:25:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63196 00:06:16.808 00:25:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:16.808 00:25:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.808 00:25:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63196 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63196 ']' 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.808 00:25:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.066 [2024-07-12 00:25:21.765728] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:17.066 [2024-07-12 00:25:21.766013] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63196 ] 00:06:17.066 [2024-07-12 00:25:21.967565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.323 [2024-07-12 00:25:22.229946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.323 [2024-07-12 00:25:22.230125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.323 [2024-07-12 00:25:22.230200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.323 [2024-07-12 00:25:22.230495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:17.888 00:25:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.888 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.888 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.888 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.888 POWER: Cannot set governor of lcore 0 to performance 00:06:17.888 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.888 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.888 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.888 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.888 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:17.888 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:17.888 POWER: Unable to set Power Management Environment for lcore 0 00:06:17.888 [2024-07-12 00:25:22.796317] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:17.888 [2024-07-12 00:25:22.796342] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:17.888 [2024-07-12 00:25:22.796363] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.888 [2024-07-12 00:25:22.796428] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.888 [2024-07-12 00:25:22.796450] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.888 [2024-07-12 00:25:22.796462] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.888 00:25:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.888 00:25:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 [2024-07-12 00:25:23.137791] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
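The POWER errors above are the expected fallback inside a VM: the DPDK governor cannot open the host cpufreq files, so the dynamic scheduler initializes without it and keeps its built-in thresholds (load limit 20, core limit 80, core busy 95) before the test proceeds. As a rough sketch only, the same sequence can be driven by hand against an SPDK app started with --wait-for-rpc (default socket /var/tmp/spdk.sock assumed; rpc.py ships in the SPDK tree):

  ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before framework init
  ./scripts/rpc.py framework_start_init              # subsystems start; scheduler takes over
  ./scripts/rpc.py framework_get_scheduler           # verify: should report "dynamic"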
00:06:18.454 00:25:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.454 00:25:23 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.454 00:25:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 ************************************ 00:06:18.454 START TEST scheduler_create_thread 00:06:18.454 ************************************ 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 2 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 3 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 4 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 5 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 6 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 7 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 8 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 9 00:06:18.454 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.455 10 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.455 00:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.387 00:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.387 00:25:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.387 00:25:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.387 00:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.387 00:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.418 00:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.419 00:06:20.419 real 0m2.137s 00:06:20.419 user 0m0.016s 00:06:20.419 sys 0m0.003s 00:06:20.419 00:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.419 00:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.419 ************************************ 00:06:20.419 END TEST scheduler_create_thread 00:06:20.419 ************************************ 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:20.419 00:25:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.419 00:25:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63196 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63196 ']' 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63196 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63196 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:20.419 killing process with pid 63196 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63196' 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63196 00:06:20.419 00:25:25 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63196 00:06:20.986 [2024-07-12 00:25:25.763805] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
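The teardown traced above is the harness's standard killprocess flow: check that the pid is non-empty, probe liveness with kill -0, read the process name (reactor_2 here, because the scheduler app pinned its main lcore to core 2), refuse to kill anything running as sudo, then send SIGTERM and wait. Condensed from the xtrace, not the verbatim helper, its shape is roughly:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1    # still alive?
    echo "killing process with pid $pid"
    kill "$pid"                   # SIGTERM by default
    wait "$pid"                   # reap it so the exit code propagates
  }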
00:06:22.360 00:06:22.360 real 0m5.665s 00:06:22.360 user 0m9.361s 00:06:22.360 sys 0m0.521s 00:06:22.360 00:25:27 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.360 00:25:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.360 ************************************ 00:06:22.360 END TEST event_scheduler 00:06:22.360 ************************************ 00:06:22.360 00:25:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.360 00:25:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.360 00:25:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.360 00:25:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.360 00:25:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.360 00:25:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.360 ************************************ 00:06:22.360 START TEST app_repeat 00:06:22.360 ************************************ 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63321 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.360 Process app_repeat pid: 63321 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63321' 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.360 spdk_app_start Round 0 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.360 00:25:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63321 /var/tmp/spdk-nbd.sock 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63321 ']' 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.360 00:25:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.619 [2024-07-12 00:25:27.333546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:22.619 [2024-07-12 00:25:27.333822] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63321 ] 00:06:22.619 [2024-07-12 00:25:27.515787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.878 [2024-07-12 00:25:27.764597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.878 [2024-07-12 00:25:27.764604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.444 00:25:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.444 00:25:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.444 00:25:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.702 Malloc0 00:06:23.702 00:25:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.961 Malloc1 00:06:24.220 00:25:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.220 00:25:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.220 /dev/nbd0 00:06:24.478 00:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.478 00:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:24.478 00:25:29 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.478 1+0 records in 00:06:24.478 1+0 records out 00:06:24.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296912 s, 13.8 MB/s 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.478 00:25:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.478 00:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.478 00:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.478 00:25:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.737 /dev/nbd1 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.737 1+0 records in 00:06:24.737 1+0 records out 00:06:24.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333248 s, 12.3 MB/s 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.737 00:25:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
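Beyond this single-block smoke test, each round runs the bulk write/verify cycle traced below: 1 MiB of urandom data (256 x 4 KiB blocks) is pushed through each nbd device with oflag=direct and compared back byte-for-byte, so any corruption in the Malloc bdev path fails the round. A standalone reproduction of that check (temp file name illustrative):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"   # non-zero exit on the first differing byte
  done
  rm /tmp/nbdrandtest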
00:06:24.737 00:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.997 { 00:06:24.997 "bdev_name": "Malloc0", 00:06:24.997 "nbd_device": "/dev/nbd0" 00:06:24.997 }, 00:06:24.997 { 00:06:24.997 "bdev_name": "Malloc1", 00:06:24.997 "nbd_device": "/dev/nbd1" 00:06:24.997 } 00:06:24.997 ]' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.997 { 00:06:24.997 "bdev_name": "Malloc0", 00:06:24.997 "nbd_device": "/dev/nbd0" 00:06:24.997 }, 00:06:24.997 { 00:06:24.997 "bdev_name": "Malloc1", 00:06:24.997 "nbd_device": "/dev/nbd1" 00:06:24.997 } 00:06:24.997 ]' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.997 /dev/nbd1' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.997 /dev/nbd1' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.997 256+0 records in 00:06:24.997 256+0 records out 00:06:24.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748519 s, 140 MB/s 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.997 256+0 records in 00:06:24.997 256+0 records out 00:06:24.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276861 s, 37.9 MB/s 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.997 256+0 records in 00:06:24.997 256+0 records out 00:06:24.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337661 s, 31.1 MB/s 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.997 00:25:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.997 00:25:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.256 00:25:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.829 00:25:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.829 00:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.086 00:25:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.086 00:25:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.652 00:25:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.048 [2024-07-12 00:25:32.540778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.048 [2024-07-12 00:25:32.777821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.048 [2024-07-12 00:25:32.777836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.332 [2024-07-12 00:25:32.968576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.332 [2024-07-12 00:25:32.968706] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.705 00:25:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.705 spdk_app_start Round 1 00:06:29.705 00:25:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:29.705 00:25:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63321 /var/tmp/spdk-nbd.sock 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63321 ']' 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
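Round 0 ends the way every round does: both nbd disks are stopped, nbd_get_disks returns an empty array (jq extracts no .nbd_device entries, so the /dev/nbd count is 0), and the app is asked to restart itself via spdk_kill_instance SIGTERM followed by a short sleep. The teardown check alone looks roughly like this sketch (assuming jq on the PATH):

  count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) || true
  [ "$count" -eq 0 ] && echo 'all nbd devices stopped'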
00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.705 00:25:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.705 00:25:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.963 Malloc0 00:06:30.222 00:25:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.481 Malloc1 00:06:30.481 00:25:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.481 00:25:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.741 /dev/nbd0 00:06:30.741 00:25:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.741 00:25:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.741 1+0 records in 00:06:30.741 1+0 records out 
00:06:30.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032138 s, 12.7 MB/s 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.741 00:25:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.741 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.741 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.741 00:25:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.999 /dev/nbd1 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.999 1+0 records in 00:06:30.999 1+0 records out 00:06:30.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356738 s, 11.5 MB/s 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.999 00:25:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.999 00:25:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.258 { 00:06:31.258 "bdev_name": "Malloc0", 00:06:31.258 "nbd_device": "/dev/nbd0" 00:06:31.258 }, 00:06:31.258 { 00:06:31.258 "bdev_name": "Malloc1", 00:06:31.258 "nbd_device": "/dev/nbd1" 00:06:31.258 } 
00:06:31.258 ]' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.258 { 00:06:31.258 "bdev_name": "Malloc0", 00:06:31.258 "nbd_device": "/dev/nbd0" 00:06:31.258 }, 00:06:31.258 { 00:06:31.258 "bdev_name": "Malloc1", 00:06:31.258 "nbd_device": "/dev/nbd1" 00:06:31.258 } 00:06:31.258 ]' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.258 /dev/nbd1' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.258 /dev/nbd1' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.258 256+0 records in 00:06:31.258 256+0 records out 00:06:31.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010374 s, 101 MB/s 00:06:31.258 00:25:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.259 00:25:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.259 256+0 records in 00:06:31.259 256+0 records out 00:06:31.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280618 s, 37.4 MB/s 00:06:31.259 00:25:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.259 00:25:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.517 256+0 records in 00:06:31.517 256+0 records out 00:06:31.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355862 s, 29.5 MB/s 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.517 00:25:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.795 00:25:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.053 00:25:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.311 00:25:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.311 00:25:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.876 00:25:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.276 [2024-07-12 00:25:38.845971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.276 [2024-07-12 00:25:39.088911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.276 [2024-07-12 00:25:39.088916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.533 [2024-07-12 00:25:39.283950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.533 [2024-07-12 00:25:39.284088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.907 00:25:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.907 spdk_app_start Round 2 00:06:35.907 00:25:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:35.907 00:25:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63321 /var/tmp/spdk-nbd.sock 00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63321 ']' 00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
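Round 2 now repeats the identical setup: each round rebuilds its devices from scratch over the app's own RPC socket, creating two 64 MiB malloc bdevs with a 4 KiB block size and exporting each one as an nbd device. Issued by hand, the per-round setup amounts to roughly (sizes and socket path as in the trace):

  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create 64 4096        # -> Malloc0
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1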
00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.907 00:25:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.164 00:25:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.164 00:25:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.164 00:25:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.421 Malloc0 00:06:36.421 00:25:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.679 Malloc1 00:06:36.937 00:25:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.937 00:25:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.937 /dev/nbd0 00:06:37.195 00:25:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.195 00:25:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.195 1+0 records in 00:06:37.195 1+0 records out 
00:06:37.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271241 s, 15.1 MB/s 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.195 00:25:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.195 00:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.195 00:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.195 00:25:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.195 /dev/nbd1 00:06:37.453 00:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.454 1+0 records in 00:06:37.454 1+0 records out 00:06:37.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314942 s, 13.0 MB/s 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.454 00:25:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.454 00:25:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.712 { 00:06:37.712 "bdev_name": "Malloc0", 00:06:37.712 "nbd_device": "/dev/nbd0" 00:06:37.712 }, 00:06:37.712 { 00:06:37.712 "bdev_name": "Malloc1", 00:06:37.712 "nbd_device": "/dev/nbd1" 00:06:37.712 } 
00:06:37.712 ]' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.712 { 00:06:37.712 "bdev_name": "Malloc0", 00:06:37.712 "nbd_device": "/dev/nbd0" 00:06:37.712 }, 00:06:37.712 { 00:06:37.712 "bdev_name": "Malloc1", 00:06:37.712 "nbd_device": "/dev/nbd1" 00:06:37.712 } 00:06:37.712 ]' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.712 /dev/nbd1' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.712 /dev/nbd1' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.712 256+0 records in 00:06:37.712 256+0 records out 00:06:37.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00831485 s, 126 MB/s 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.712 256+0 records in 00:06:37.712 256+0 records out 00:06:37.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310684 s, 33.8 MB/s 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.712 256+0 records in 00:06:37.712 256+0 records out 00:06:37.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342395 s, 30.6 MB/s 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.712 00:25:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.712 00:25:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.282 00:25:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.540 00:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.809 00:25:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.809 00:25:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.067 00:25:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.439 [2024-07-12 00:25:45.239855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.698 [2024-07-12 00:25:45.480360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.698 [2024-07-12 00:25:45.480373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.955 [2024-07-12 00:25:45.673477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.955 [2024-07-12 00:25:45.673547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.333 00:25:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63321 /var/tmp/spdk-nbd.sock 00:06:42.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63321 ']' 00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
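By this point the trace has walked one complete NBD pass: start both disks, wait for the kernel device nodes, write 1 MiB of random data through each device with O_DIRECT, byte-compare it back, then stop the disks. The two bdev/nbd_common.sh helpers doing the data work can be reconstructed nearly line-for-line from the xtrace; a hedged sketch (the retry delay in waitfornbd and the failure paths are not visible in this log and are assumed):

# waitfornbd: poll /proc/partitions, then prove the node answers a direct read
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; the delay is not traced
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$rootdir/test/event/nbdtest" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$rootdir/test/event/nbdtest")
        rm -f "$rootdir/test/event/nbdtest"
        if [ "$size" != 0 ]; then return 0; fi
    done
    return 1
}

# nbd_dd_data_verify: push random data to every device, then cmp it back
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2 i
    local tmp_file=$rootdir/test/event/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 1 MiB reference file
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"    # any byte mismatch fails the test
        done
        rm "$tmp_file"
    fi
}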
00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.333 00:25:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:42.591 00:25:47 event.app_repeat -- event/event.sh@39 -- # killprocess 63321 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63321 ']' 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63321 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63321 00:06:42.591 killing process with pid 63321 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63321' 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63321 00:06:42.591 00:25:47 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63321 00:06:43.966 spdk_app_start is called in Round 0. 00:06:43.966 Shutdown signal received, stop current app iteration 00:06:43.966 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:43.966 spdk_app_start is called in Round 1. 00:06:43.966 Shutdown signal received, stop current app iteration 00:06:43.966 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:43.966 spdk_app_start is called in Round 2. 00:06:43.966 Shutdown signal received, stop current app iteration 00:06:43.966 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:43.966 spdk_app_start is called in Round 3. 
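The Round 0-3 summary above is the shutdown callback firing once per iteration; the final waitforlisten/killprocess pair then reaps pid 63321. killprocess itself is fully visible in the xtrace; a close reconstruction (only the sudo branch is not exercised in this run, so its exact handling is assumed):

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                            # is it still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    # a sudo-wrapped target would need privileged handling; not taken in this log
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true
}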
00:06:43.966 Shutdown signal received, stop current app iteration 00:06:43.966 00:25:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.966 ************************************ 00:06:43.966 END TEST app_repeat 00:06:43.966 ************************************ 00:06:43.966 00:25:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.966 00:06:43.966 real 0m21.254s 00:06:43.966 user 0m45.332s 00:06:43.966 sys 0m3.310s 00:06:43.966 00:25:48 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.966 00:25:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.966 00:25:48 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.966 00:25:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.966 00:25:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.966 00:25:48 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.966 00:25:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.966 00:25:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.966 ************************************ 00:06:43.966 START TEST cpu_locks 00:06:43.966 ************************************ 00:06:43.966 00:25:48 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.966 * Looking for test storage... 00:06:43.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.966 00:25:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.966 00:25:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.966 00:25:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.966 00:25:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.966 00:25:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.966 00:25:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.966 00:25:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.966 ************************************ 00:06:43.966 START TEST default_locks 00:06:43.966 ************************************ 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63974 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63974 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63974 ']' 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
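cpu_locks.sh, starting here, verifies SPDK's core-mask file locks: a target launched with -m 0x1 must hold a POSIX lock for core 0 that lslocks can see. The two assertion helpers the following traces exercise are tiny; a sketch (the spdk_cpu_lock file location and glob in no_locks are assumptions, since only the array-count test appears in the log):

locks_exist() {
    # the target flocks a per-core file; lslocks reports it as spdk_cpu_lock*
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

no_locks() {
    local lock_files=()
    # populate from the assumed lock directory; the path is not traced here
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock*)
    shopt -u nullglob
    (( ${#lock_files[@]} != 0 )) && return 1
    return 0
}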
00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.966 00:25:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.966 [2024-07-12 00:25:48.829345] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:43.966 [2024-07-12 00:25:48.829535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63974 ] 00:06:44.225 [2024-07-12 00:25:48.996841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.484 [2024-07-12 00:25:49.250762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.419 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.419 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:45.419 00:25:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63974 00:06:45.419 00:25:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63974 00:06:45.419 00:25:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63974 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63974 ']' 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63974 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63974 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.678 killing process with pid 63974 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63974' 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63974 00:06:45.678 00:25:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63974 00:06:48.207 00:25:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63974 00:06:48.207 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63974 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63974 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 63974 ']' 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.208 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63974) - No such process 00:06:48.208 ERROR: process (pid: 63974) is no longer running 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.208 00:25:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.208 00:25:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.208 00:06:48.208 real 0m4.335s 00:06:48.208 user 0m4.319s 00:06:48.208 sys 0m0.739s 00:06:48.208 00:25:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.208 00:25:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.208 ************************************ 00:06:48.208 END TEST default_locks 00:06:48.208 ************************************ 00:06:48.208 00:25:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.208 00:25:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.208 00:25:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.208 00:25:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.208 00:25:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.208 ************************************ 00:06:48.208 START TEST default_locks_via_rpc 00:06:48.208 ************************************ 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64067 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64067 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 64067 ']' 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.208 00:25:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.465 [2024-07-12 00:25:53.191160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:48.465 [2024-07-12 00:25:53.191385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64067 ] 00:06:48.465 [2024-07-12 00:25:53.365007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.722 [2024-07-12 00:25:53.609038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64067 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64067 00:06:49.657 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.223 00:25:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64067 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64067 ']' 
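default_locks_via_rpc, traced above, flips lock enforcement at runtime rather than at startup: framework_disable_cpumask_locks releases the core-0 lock (so no_locks must pass), and framework_enable_cpumask_locks re-acquires it (so locks_exist must pass) before the target is killed. The flow in outline, with rpc_cmd standing for the scripts/rpc.py wrapper used throughout this log:

# flow of event/cpu_locks.sh@61-73 as traced above
"$rootdir/build/bin/spdk_tgt" -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"
rpc_cmd framework_disable_cpumask_locks   # lock file for core 0 is dropped
no_locks
rpc_cmd framework_enable_cpumask_locks    # lock is taken again
locks_exist "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid"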
00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64067 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64067 00:06:50.224 killing process with pid 64067 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64067' 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64067 00:06:50.224 00:25:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64067 00:06:52.770 00:06:52.770 real 0m4.073s 00:06:52.770 user 0m4.065s 00:06:52.770 sys 0m0.737s 00:06:52.770 00:25:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.770 ************************************ 00:06:52.770 END TEST default_locks_via_rpc 00:06:52.770 ************************************ 00:06:52.770 00:25:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.770 00:25:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.770 00:25:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:52.770 00:25:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.770 00:25:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.770 00:25:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.770 ************************************ 00:06:52.770 START TEST non_locking_app_on_locked_coremask 00:06:52.770 ************************************ 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:52.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64159 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64159 /var/tmp/spdk.sock 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64159 ']' 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
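The "No such process" assertion in default_locks above relied on the NOT wrapper, which runs a command that is expected to fail and inverts its exit status. A reconstruction from the autotest_common.sh xtrace (the signal-normalization branch and valid_exec_arg's case arms beyond the type -t probe are not visible here and are sketched):

valid_exec_arg() {
    local arg=$1
    # only things bash can actually execute may be negated
    case "$(type -t "$arg")" in
        function|builtin|file|alias|keyword) return 0 ;;
        *) return 1 ;;
    esac
}

NOT() {
    local es=0
    valid_exec_arg "$@" || return 1   # refuse to negate something that cannot run
    "$@" || es=$?
    # deaths by signal (es > 128) get normalized; exact mapping is assumed
    (( es > 128 )) && es=1
    (( !es == 0 ))                    # succeed only if the wrapped command failed
}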
00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.770 00:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.770 [2024-07-12 00:25:57.335005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:52.770 [2024-07-12 00:25:57.335249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64159 ] 00:06:52.770 [2024-07-12 00:25:57.522431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.030 [2024-07-12 00:25:57.770805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64192 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64192 /var/tmp/spdk2.sock 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64192 ']' 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.966 00:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.966 [2024-07-12 00:25:58.740355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:53.966 [2024-07-12 00:25:58.740798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64192 ] 00:06:54.224 [2024-07-12 00:25:58.912833] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
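non_locking_app_on_locked_coremask, now bringing up its second target, checks that --disable-cpumask-locks really is an opt-out: with pid 64159 holding the core-0 lock, a second target on the same mask still starts because it skips lock acquisition entirely ("CPU core locks deactivated" above). The scenario in outline, following the conventions of the earlier sketches:

# first target claims core 0 normally
"$rootdir/build/bin/spdk_tgt" -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
# second target shares the mask but opts out of lock enforcement
"$rootdir/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
locks_exist "$spdk_tgt_pid"    # the lock still belongs to the first instance
killprocess "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid2"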
00:06:54.225 [2024-07-12 00:25:58.912931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.483 [2024-07-12 00:25:59.409310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.412 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.412 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.412 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64159 00:06:56.412 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64159 00:06:56.412 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64159 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64159 ']' 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64159 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64159 00:06:56.978 killing process with pid 64159 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64159' 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64159 00:06:56.978 00:26:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64159 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64192 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64192 ']' 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64192 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64192 00:07:02.241 killing process with pid 64192 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64192' 00:07:02.241 00:26:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64192 00:07:02.241 00:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64192 00:07:03.615 00:07:03.615 real 0m11.371s 00:07:03.615 user 0m11.702s 00:07:03.615 sys 0m1.384s 00:07:03.615 00:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.615 00:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.615 ************************************ 00:07:03.615 END TEST non_locking_app_on_locked_coremask 00:07:03.615 ************************************ 00:07:03.879 00:26:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.879 00:26:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:03.879 00:26:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.879 00:26:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.879 00:26:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.879 ************************************ 00:07:03.879 START TEST locking_app_on_unlocked_coremask 00:07:03.879 ************************************ 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64346 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64346 /var/tmp/spdk.sock 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64346 ']' 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.879 00:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.879 [2024-07-12 00:26:08.705940] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:03.879 [2024-07-12 00:26:08.706113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64346 ] 00:07:04.136 [2024-07-12 00:26:08.871362] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:04.136 [2024-07-12 00:26:08.871483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.393 [2024-07-12 00:26:09.121605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64374 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64374 /var/tmp/spdk2.sock 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64374 ']' 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.384 00:26:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.384 [2024-07-12 00:26:10.062914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
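locking_app_on_unlocked_coremask inverts the previous scenario: the first target (64346) starts with --disable-cpumask-locks and leaves core 0 unclaimed, so the second, normally-locked target (64374) acquires the lock itself, and the trace goes on to assert locks_exist against the second pid. In outline:

"$rootdir/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks &   # takes no lock
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock
"$rootdir/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
locks_exist "$pid2"    # the lock belongs to the second instance this time
killprocess "$pid1"
killprocess "$pid2"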
00:07:05.384 [2024-07-12 00:26:10.063089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64374 ] 00:07:05.384 [2024-07-12 00:26:10.237359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.953 [2024-07-12 00:26:10.747639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.913 00:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.913 00:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.913 00:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64374 00:07:07.913 00:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64374 00:07:07.913 00:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64346 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64346 ']' 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64346 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64346 00:07:08.478 killing process with pid 64346 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64346' 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64346 00:07:08.478 00:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64346 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64374 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64374 ']' 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64374 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64374 00:07:13.743 killing process with pid 64374 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.743 00:26:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64374' 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64374 00:07:13.743 00:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64374 00:07:15.642 ************************************ 00:07:15.642 END TEST locking_app_on_unlocked_coremask 00:07:15.642 ************************************ 00:07:15.642 00:07:15.642 real 0m11.723s 00:07:15.642 user 0m12.139s 00:07:15.642 sys 0m1.469s 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 00:26:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.642 00:26:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:15.642 00:26:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.642 00:26:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.642 00:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 START TEST locking_app_on_locked_coremask 00:07:15.642 ************************************ 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64533 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64533 /var/tmp/spdk.sock 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64533 ']' 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.642 00:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 [2024-07-12 00:26:20.515374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:15.642 [2024-07-12 00:26:20.515652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64533 ] 00:07:15.901 [2024-07-12 00:26:20.703659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.159 [2024-07-12 00:26:20.951073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64572 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64572 /var/tmp/spdk2.sock 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64572 /var/tmp/spdk2.sock 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64572 /var/tmp/spdk2.sock 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64572 ']' 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.095 00:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.095 [2024-07-12 00:26:21.922534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
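locking_app_on_locked_coremask is the enforcement test proper: the second target (64572) starts with locks enabled on the very core the first (64533) already holds, so its startup must fail, and NOT waitforlisten asserts exactly that; the trace that follows shows app.c refusing the core-0 claim. In outline:

"$rootdir/build/bin/spdk_tgt" -m 0x1 &            # claims core 0
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock
"$rootdir/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock     # expected: lock claim fails
locks_exist "$pid1"                               # first instance still holds it
killprocess "$pid1"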
00:07:17.095 [2024-07-12 00:26:21.922984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64572 ] 00:07:17.354 [2024-07-12 00:26:22.099027] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64533 has claimed it. 00:07:17.354 [2024-07-12 00:26:22.099146] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.921 ERROR: process (pid: 64572) is no longer running 00:07:17.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64572) - No such process 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64533 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64533 00:07:17.921 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.178 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64533 00:07:18.179 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64533 ']' 00:07:18.179 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64533 00:07:18.179 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:18.179 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.179 00:26:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64533 00:07:18.179 killing process with pid 64533 00:07:18.179 00:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.179 00:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.179 00:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64533' 00:07:18.179 00:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64533 00:07:18.179 00:26:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64533 00:07:20.705 ************************************ 00:07:20.705 END TEST locking_app_on_locked_coremask 00:07:20.705 ************************************ 00:07:20.705 00:07:20.705 real 0m4.970s 00:07:20.705 user 0m5.168s 00:07:20.705 sys 0m0.891s 00:07:20.705 00:26:25 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.705 00:26:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.705 00:26:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:20.705 00:26:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.705 00:26:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.705 00:26:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.705 00:26:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.705 ************************************ 00:07:20.705 START TEST locking_overlapped_coremask 00:07:20.705 ************************************ 00:07:20.705 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:20.705 00:26:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64642 00:07:20.705 00:26:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64642 /var/tmp/spdk.sock 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64642 ']' 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.706 00:26:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.706 [2024-07-12 00:26:25.508586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:20.706 [2024-07-12 00:26:25.509016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64642 ] 00:07:20.963 [2024-07-12 00:26:25.677184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.221 [2024-07-12 00:26:25.936121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.221 [2024-07-12 00:26:25.936234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.221 [2024-07-12 00:26:25.936245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64683 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64683 /var/tmp/spdk2.sock 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64683 /var/tmp/spdk2.sock 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:22.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64683 /var/tmp/spdk2.sock 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64683 ']' 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.158 00:26:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.416 [2024-07-12 00:26:27.099978] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
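locking_overlapped_coremask pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4); the two masks intersect only on core 2, which is why the failure below names that core. The overlap is plain bitwise AND, easy to confirm in the shell:

```bash
# Intersection of the two core masks used in this test.
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only
```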
00:07:22.416 [2024-07-12 00:26:27.100239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64683 ] 00:07:22.416 [2024-07-12 00:26:27.297272] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64642 has claimed it. 00:07:22.416 [2024-07-12 00:26:27.297543] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.984 ERROR: process (pid: 64683) is no longer running 00:07:22.984 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64683) - No such process 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64642 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64642 ']' 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64642 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64642 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64642' 00:07:22.984 killing process with pid 64642 00:07:22.984 00:26:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64642 00:07:22.984 00:26:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64642 00:07:25.514 00:07:25.514 real 0m4.660s 00:07:25.514 user 0m12.144s 00:07:25.514 sys 0m0.831s 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.515 ************************************ 00:07:25.515 END TEST locking_overlapped_coremask 00:07:25.515 ************************************ 00:07:25.515 00:26:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:25.515 00:26:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:25.515 00:26:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.515 00:26:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.515 00:26:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.515 ************************************ 00:07:25.515 START TEST locking_overlapped_coremask_via_rpc 00:07:25.515 ************************************ 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64753 00:07:25.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64753 /var/tmp/spdk.sock 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64753 ']' 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.515 00:26:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.515 [2024-07-12 00:26:30.210160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:25.515 [2024-07-12 00:26:30.210327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64753 ] 00:07:25.515 [2024-07-12 00:26:30.376084] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
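Note the "CPU core locks deactivated." notice: the via_rpc variant starts both targets with --disable-cpumask-locks, so no core locks are taken at startup; they are only claimed later through the RPC under test. The lock files themselves are ordinary files under /var/tmp, which is what check_remaining_locks globs and what the earlier lslocks call inspected. A hedged way to poke at them by hand:

```bash
# Inspect SPDK's per-core lock files directly (names taken from this log;
# the exact flock semantics are app.c internals and may differ across versions).
ls -l /var/tmp/spdk_cpu_lock_*              # expect _000.._002 once mask 0x7 is locked
lslocks -p "$TGT_PID" | grep spdk_cpu_lock  # TGT_PID: pid of the running target
```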
00:07:25.515 [2024-07-12 00:26:30.376180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.773 [2024-07-12 00:26:30.625688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.773 [2024-07-12 00:26:30.625817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.773 [2024-07-12 00:26:30.625833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64783 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64783 /var/tmp/spdk2.sock 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64783 ']' 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.706 00:26:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.706 [2024-07-12 00:26:31.567164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:26.706 [2024-07-12 00:26:31.567619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64783 ] 00:07:26.964 [2024-07-12 00:26:31.739514] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
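With locks disabled on both sides, the second target (mask 0x1c) comes up cleanly even though core 2 is shared with the first; the reactor notices that follow, on cores 2-4, confirm it. A quick liveness check, with the caveat that framework_get_reactors is not exercised anywhere in this log and rpc.py's path is assumed:

```bash
pgrep -af spdk_tgt                           # both target pids should be listed
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk.sock  framework_get_reactors | head -n 5
$RPC -s /var/tmp/spdk2.sock framework_get_reactors | head -n 5
```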
00:07:26.964 [2024-07-12 00:26:31.739585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.532 [2024-07-12 00:26:32.227819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.532 [2024-07-12 00:26:32.231476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.532 [2024-07-12 00:26:32.231489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.906 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.906 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.907 [2024-07-12 00:26:33.815623] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64753 has claimed it. 00:07:28.907 2024/07/12 00:26:33 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:28.907 request: 00:07:28.907 { 00:07:28.907 "method": "framework_enable_cpumask_locks", 00:07:28.907 "params": {} 00:07:28.907 } 00:07:28.907 Got JSON-RPC error response 00:07:28.907 GoRPCClient: error on JSON-RPC call 00:07:28.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
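The RPC exchange above is the heart of the test: framework_enable_cpumask_locks succeeds on the first target (claiming cores 0-2), then the same call against /var/tmp/spdk2.sock fails with Code=-32603 because core 2 is already locked. The harness drives this through a Go JSON-RPC client (hence "GoRPCClient" in the error); the equivalent manual calls would look like this, with rpc.py's path an assumption:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # ok: locks cores 0-2
$RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 claimed
```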
00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64753 /var/tmp/spdk.sock 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64753 ']' 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.907 00:26:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64783 /var/tmp/spdk2.sock 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64783 ']' 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
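The repeated "Waiting for process..." lines come from waitforlisten in autotest_common.sh: it echoes once, then polls (max_retries=100 in the trace) until the target's RPC socket is usable. A simplified sketch of that shape only, not the original helper, which also verifies RPC connectivity rather than mere socket existence:

```bash
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
    [[ -S $rpc_addr ]] && return 0           # socket is up
    sleep 0.1
  done
  return 1                                   # timed out
}
```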
00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.476 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.734 ************************************ 00:07:29.734 END TEST locking_overlapped_coremask_via_rpc 00:07:29.734 ************************************ 00:07:29.734 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.735 00:07:29.735 real 0m4.325s 00:07:29.735 user 0m1.416s 00:07:29.735 sys 0m0.240s 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.735 00:26:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:29.735 00:26:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:29.735 00:26:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64753 ]] 00:07:29.735 00:26:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64753 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64753 ']' 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64753 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64753 00:07:29.735 killing process with pid 64753 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64753' 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64753 00:07:29.735 00:26:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64753 00:07:32.265 00:26:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64783 ]] 00:07:32.265 00:26:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64783 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64783 ']' 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64783 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:32.265 00:26:36 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64783 00:07:32.265 killing process with pid 64783 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64783' 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64783 00:07:32.265 00:26:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64783 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64753 ]] 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64753 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64753 ']' 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64753 00:07:34.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64753) - No such process 00:07:34.165 Process with pid 64753 is not found 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64753 is not found' 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64783 ]] 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64783 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64783 ']' 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64783 00:07:34.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64783) - No such process 00:07:34.165 Process with pid 64783 is not found 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64783 is not found' 00:07:34.165 00:26:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.165 ************************************ 00:07:34.165 END TEST cpu_locks 00:07:34.165 ************************************ 00:07:34.165 00:07:34.165 real 0m50.519s 00:07:34.165 user 1m23.752s 00:07:34.165 sys 0m7.405s 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.165 00:26:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 00:26:39 event -- common/autotest_common.sh@1142 -- # return 0 00:07:34.422 00:07:34.422 real 1m23.540s 00:07:34.422 user 2m26.560s 00:07:34.422 sys 0m11.841s 00:07:34.422 00:26:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.422 ************************************ 00:07:34.422 END TEST event 00:07:34.422 ************************************ 00:07:34.422 00:26:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 00:26:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.422 00:26:39 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:34.422 00:26:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.422 00:26:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.422 00:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 ************************************ 00:07:34.422 START TEST thread 
00:07:34.422 ************************************ 00:07:34.422 00:26:39 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:34.422 * Looking for test storage... 00:07:34.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:34.422 00:26:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.422 00:26:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:34.422 00:26:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.422 00:26:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 ************************************ 00:07:34.422 START TEST thread_poller_perf 00:07:34.422 ************************************ 00:07:34.422 00:26:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.422 [2024-07-12 00:26:39.312542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:34.422 [2024-07-12 00:26:39.312776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64987 ] 00:07:34.679 [2024-07-12 00:26:39.492647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.936 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:34.936 [2024-07-12 00:26:39.735830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.308 ====================================== 00:07:36.308 busy:2210942520 (cyc) 00:07:36.308 total_run_count: 303000 00:07:36.308 tsc_hz: 2200000000 (cyc) 00:07:36.308 ====================================== 00:07:36.308 poller_cost: 7296 (cyc), 3316 (nsec) 00:07:36.308 00:07:36.308 real 0m1.884s 00:07:36.308 user 0m1.654s 00:07:36.308 sys 0m0.119s 00:07:36.308 00:26:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.308 ************************************ 00:07:36.308 END TEST thread_poller_perf 00:07:36.308 ************************************ 00:07:36.308 00:26:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.308 00:26:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:36.308 00:26:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.308 00:26:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:36.308 00:26:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.308 00:26:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.308 ************************************ 00:07:36.308 START TEST thread_poller_perf 00:07:36.308 ************************************ 00:07:36.308 00:26:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.566 [2024-07-12 00:26:41.249773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
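The poller_perf summary earlier in this block is a straight division: busy TSC cycles over total_run_count gives the per-poll cost in cycles, and tsc_hz converts that to nanoseconds. Recomputing the first run's figures from the logged values:

```bash
busy=2210942520 runs=303000 tsc_hz=2200000000            # values copied from the log
echo "cost_cyc=$(( busy / runs ))"                       # -> 7296
echo "cost_ns=$(( busy / runs * 1000000000 / tsc_hz ))"  # -> 3316
```

The second run, launched above with -l 0 (zero-microsecond period), removes the timer wait, so expect a much higher run count and a far lower per-poll cost in the results that follow.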
00:07:36.566 [2024-07-12 00:26:41.249998] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65029 ] 00:07:36.566 [2024-07-12 00:26:41.431452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.824 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:36.824 [2024-07-12 00:26:41.672856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.198 ====================================== 00:07:38.198 busy:2203780810 (cyc) 00:07:38.198 total_run_count: 3751000 00:07:38.198 tsc_hz: 2200000000 (cyc) 00:07:38.198 ====================================== 00:07:38.198 poller_cost: 587 (cyc), 266 (nsec) 00:07:38.198 00:07:38.198 real 0m1.901s 00:07:38.198 user 0m1.665s 00:07:38.198 sys 0m0.125s 00:07:38.198 ************************************ 00:07:38.198 END TEST thread_poller_perf 00:07:38.198 ************************************ 00:07:38.198 00:26:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.198 00:26:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:38.456 00:26:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:38.456 00:26:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:38.456 00:07:38.456 real 0m3.955s 00:07:38.456 user 0m3.373s 00:07:38.456 sys 0m0.355s 00:07:38.456 00:26:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.456 00:26:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.456 ************************************ 00:07:38.456 END TEST thread 00:07:38.456 ************************************ 00:07:38.456 00:26:43 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.456 00:26:43 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:38.456 00:26:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.456 00:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.456 00:26:43 -- common/autotest_common.sh@10 -- # set +x 00:07:38.456 ************************************ 00:07:38.456 START TEST accel 00:07:38.456 ************************************ 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:38.456 * Looking for test storage... 00:07:38.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:38.456 00:26:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:38.456 00:26:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:38.456 00:26:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:38.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.456 00:26:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65110 00:07:38.456 00:26:43 accel -- accel/accel.sh@63 -- # waitforlisten 65110 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@829 -- # '[' -z 65110 ']' 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.456 00:26:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.456 00:26:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.456 00:26:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:38.456 00:26:43 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:38.456 00:26:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.456 00:26:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.456 00:26:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.456 00:26:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.456 00:26:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.456 00:26:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:38.456 00:26:43 accel -- accel/accel.sh@41 -- # jq -r . 00:07:38.456 [2024-07-12 00:26:43.383145] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:38.456 [2024-07-12 00:26:43.383326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65110 ] 00:07:38.714 [2024-07-12 00:26:43.553312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.972 [2024-07-12 00:26:43.795306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@862 -- # return 0 00:07:39.907 00:26:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:39.907 00:26:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:39.907 00:26:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:39.907 00:26:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:39.907 00:26:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:39.907 00:26:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.907 00:26:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:39.907 00:26:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:39.907 00:26:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:39.907 00:26:44 accel -- accel/accel.sh@75 -- # killprocess 65110 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@948 -- # '[' -z 65110 ']' 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@952 -- # kill -0 65110 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@953 -- # uname 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65110 00:07:39.907 killing process with pid 65110 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65110' 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@967 -- # kill 65110 00:07:39.907 00:26:44 accel -- common/autotest_common.sh@972 -- # wait 65110 00:07:42.436 00:26:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:42.436 00:26:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:42.436 00:26:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.436 00:26:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.436 00:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 00:26:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:42.436 00:26:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:42.436 00:26:47 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.436 00:26:47 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 00:26:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.436 00:26:47 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:42.436 00:26:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:42.436 00:26:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.436 00:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 ************************************ 00:07:42.436 START TEST accel_missing_filename 00:07:42.436 ************************************ 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.436 00:26:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:42.436 00:26:47 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:42.436 [2024-07-12 00:26:47.116723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:42.436 [2024-07-12 00:26:47.116883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65209 ] 00:07:42.436 [2024-07-12 00:26:47.285286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.694 [2024-07-12 00:26:47.558854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.951 [2024-07-12 00:26:47.764471] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.517 [2024-07-12 00:26:48.265874] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:43.777 A filename is required. 
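"A filename is required." is exactly what accel_missing_filename wants to see: compress/decompress workloads take their input size from a file, so -w compress without -l must abort. Side by side, using the binary from this log and the bib input file that the next test passes explicitly:

```bash
PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
$PERF -t 1 -w compress                                                  # aborts: filename required
$PERF -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # runs
```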
00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.777 00:07:43.777 real 0m1.601s 00:07:43.777 user 0m1.339s 00:07:43.777 sys 0m0.197s 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.777 00:26:48 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:43.777 ************************************ 00:07:43.777 END TEST accel_missing_filename 00:07:43.777 ************************************ 00:07:43.777 00:26:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.777 00:26:48 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.777 00:26:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:43.777 00:26:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.777 00:26:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.036 ************************************ 00:07:44.036 START TEST accel_compress_verify 00:07:44.036 ************************************ 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.036 00:26:48 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.036 00:26:48 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:44.036 00:26:48 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:44.036 [2024-07-12 00:26:48.775867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.036 [2024-07-12 00:26:48.776054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65240 ] 00:07:44.036 [2024-07-12 00:26:48.945583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.294 [2024-07-12 00:26:49.189177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.552 [2024-07-12 00:26:49.394957] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.117 [2024-07-12 00:26:49.893569] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:45.374 00:07:45.374 Compression does not support the verify option, aborting. 00:07:45.374 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:45.374 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.374 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:45.374 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.374 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:45.634 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.634 00:07:45.634 real 0m1.590s 00:07:45.634 user 0m1.330s 00:07:45.634 sys 0m0.204s 00:07:45.634 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.634 ************************************ 00:07:45.634 END TEST accel_compress_verify 00:07:45.634 00:26:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 ************************************ 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.634 00:26:50 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 ************************************ 00:07:45.634 START TEST accel_wrong_workload 00:07:45.634 ************************************ 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:45.634 00:26:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:45.634 Unsupported workload type: foobar 00:07:45.634 [2024-07-12 00:26:50.402160] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:45.634 accel_perf options: 00:07:45.634 [-h help message] 00:07:45.634 [-q queue depth per core] 00:07:45.634 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:45.634 [-T number of threads per core 00:07:45.634 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:45.634 [-t time in seconds] 00:07:45.634 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:45.634 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:45.634 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:45.634 [-l for compress/decompress workloads, name of uncompressed input file 00:07:45.634 [-S for crc32c workload, use this seed value (default 0) 00:07:45.634 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:45.634 [-f for fill workload, use this BYTE value (default 255) 00:07:45.634 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:45.634 [-y verify result if this switch is on] 00:07:45.634 [-a tasks to allocate per core (default: same value as -q)] 00:07:45.634 Can be used to spread operations across a wider range of memory. 
00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.634 00:07:45.634 real 0m0.068s 00:07:45.634 user 0m0.077s 00:07:45.634 sys 0m0.035s 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.634 00:26:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 ************************************ 00:07:45.634 END TEST accel_wrong_workload 00:07:45.634 ************************************ 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.634 00:26:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.634 00:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.634 ************************************ 00:07:45.634 START TEST accel_negative_buffers 00:07:45.634 ************************************ 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.634 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.634 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:45.635 00:26:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:45.635 -x option must be non-negative. 
00:07:45.635 [2024-07-12 00:26:50.514642] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:45.635 accel_perf options: 00:07:45.635 [-h help message] 00:07:45.635 [-q queue depth per core] 00:07:45.635 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:45.635 [-T number of threads per core 00:07:45.635 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:45.635 [-t time in seconds] 00:07:45.635 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:45.635 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:45.635 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:45.635 [-l for compress/decompress workloads, name of uncompressed input file 00:07:45.635 [-S for crc32c workload, use this seed value (default 0) 00:07:45.635 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:45.635 [-f for fill workload, use this BYTE value (default 255) 00:07:45.635 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:45.635 [-y verify result if this switch is on] 00:07:45.635 [-a tasks to allocate per core (default: same value as -q)] 00:07:45.635 Can be used to spread operations across a wider range of memory. 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.635 00:07:45.635 real 0m0.071s 00:07:45.635 user 0m0.080s 00:07:45.635 sys 0m0.033s 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.635 ************************************ 00:07:45.635 END TEST accel_negative_buffers 00:07:45.635 ************************************ 00:07:45.635 00:26:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:45.893 00:26:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.893 00:26:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:45.893 00:26:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.893 00:26:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.893 00:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.893 ************************************ 00:07:45.893 START TEST accel_crc32c 00:07:45.893 ************************************ 00:07:45.893 00:26:50 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:45.893 00:26:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:45.893 [2024-07-12 00:26:50.643494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:45.893 [2024-07-12 00:26:50.643680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65318 ] 00:07:45.893 [2024-07-12 00:26:50.818104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.460 [2024-07-12 00:26:51.092092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:46.460 00:26:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.361 ************************************ 00:07:48.361 END TEST accel_crc32c 00:07:48.361 ************************************ 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:48.361 00:26:53 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.361 00:07:48.361 real 0m2.638s 00:07:48.361 user 0m2.338s 00:07:48.361 sys 0m0.202s 00:07:48.361 00:26:53 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.361 00:26:53 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:48.361 00:26:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.361 00:26:53 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:48.361 00:26:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:48.361 00:26:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.361 00:26:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.361 ************************************ 00:07:48.361 START TEST accel_crc32c_C2 00:07:48.361 ************************************ 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:48.361 00:26:53 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.361 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:48.619 [2024-07-12 00:26:53.330048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:48.619 [2024-07-12 00:26:53.330295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65365 ] 00:07:48.619 [2024-07-12 00:26:53.508248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.878 [2024-07-12 00:26:53.753571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.137 00:26:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.037 00:07:51.037 real 0m2.614s 00:07:51.037 user 0m2.323s 00:07:51.037 sys 0m0.195s 00:07:51.037 ************************************ 00:07:51.037 END TEST accel_crc32c_C2 00:07:51.037 ************************************ 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.037 00:26:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:51.037 00:26:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.037 00:26:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:51.037 00:26:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.037 00:26:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.037 00:26:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.037 ************************************ 00:07:51.037 START TEST accel_copy 00:07:51.037 ************************************ 00:07:51.037 00:26:55 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.037 00:26:55 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:51.037 00:26:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:51.294 [2024-07-12 00:26:55.990105] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:51.294 [2024-07-12 00:26:55.990283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65411 ] 00:07:51.294 [2024-07-12 00:26:56.166080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.552 [2024-07-12 00:26:56.439147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 
00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.809 00:26:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:53.705 00:26:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.705 00:07:53.705 real 0m2.616s 00:07:53.705 user 0m2.303s 00:07:53.705 sys 0m0.215s 00:07:53.705 00:26:58 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.705 00:26:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:53.705 ************************************ 00:07:53.705 END TEST accel_copy 00:07:53.705 ************************************ 00:07:53.705 00:26:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.705 00:26:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.705 00:26:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:53.705 00:26:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.705 00:26:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.705 ************************************ 00:07:53.705 START TEST accel_fill 00:07:53.705 ************************************ 00:07:53.705 00:26:58 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.705 00:26:58 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:53.705 00:26:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:54.039 [2024-07-12 00:26:58.660201] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:54.039 [2024-07-12 00:26:58.660375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65458 ] 00:07:54.039 [2024-07-12 00:26:58.835476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.296 [2024-07-12 00:26:59.080588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.555 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.556 00:26:59 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:54.556 00:26:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:56.454 00:27:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.454 ************************************ 00:07:56.454 END TEST accel_fill 00:07:56.454 ************************************ 00:07:56.454 00:07:56.454 real 0m2.608s 00:07:56.454 user 0m2.323s 00:07:56.454 sys 0m0.186s 00:07:56.454 00:27:01 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.454 00:27:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:56.454 00:27:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.454 00:27:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:56.454 00:27:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.454 00:27:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.454 00:27:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.454 ************************************ 00:07:56.454 START TEST accel_copy_crc32c 00:07:56.454 ************************************ 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:56.454 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:56.454 [2024-07-12 00:27:01.306971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:56.454 [2024-07-12 00:27:01.307156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65510 ] 00:07:56.712 [2024-07-12 00:27:01.472923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.969 [2024-07-12 00:27:01.717911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.229 00:27:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.127 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.128 ************************************ 00:07:59.128 END TEST accel_copy_crc32c 00:07:59.128 ************************************ 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.128 00:07:59.128 real 0m2.602s 00:07:59.128 user 0m2.313s 00:07:59.128 sys 0m0.192s 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.128 00:27:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:59.128 00:27:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.128 00:27:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:59.128 00:27:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:59.128 00:27:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.128 00:27:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.128 ************************************ 00:07:59.128 START TEST accel_copy_crc32c_C2 00:07:59.128 ************************************ 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:59.128 00:27:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:59.128 [2024-07-12 00:27:03.960885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:59.128 [2024-07-12 00:27:03.961058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65551 ] 00:07:59.432 [2024-07-12 00:27:04.133315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.692 [2024-07-12 00:27:04.375215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.692 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:59.693 00:27:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.594 00:08:01.594 real 0m2.593s 00:08:01.594 user 0m2.308s 00:08:01.594 sys 0m0.189s 00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
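The accel_copy_crc32c_C2 run timed just above (real 0m2.593s) is the same copy-plus-CRC32C workload re-run with -C 2. Its parsed config showed both a '4096 bytes' and an '8192 bytes' value, consistent with two chained 4 KiB source segments feeding a single CRC; the trace does not spell out the -C semantics, so treat that reading as an inference. The invocation, verbatim from the trace:

    # Verbatim from the xtrace above; only -C 2 distinguishes this run
    # from the plain accel_copy_crc32c test.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2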
00:08:01.594 00:27:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:01.594 ************************************ 00:08:01.594 END TEST accel_copy_crc32c_C2 00:08:01.595 ************************************ 00:08:01.853 00:27:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.853 00:27:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:01.853 00:27:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:01.853 00:27:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.853 00:27:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.853 ************************************ 00:08:01.853 START TEST accel_dualcast 00:08:01.853 ************************************ 00:08:01.853 00:27:06 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:01.853 00:27:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:01.853 [2024-07-12 00:27:06.601330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
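Every test in this section is dispatched as run_test NAME CMD..., which produces the starred START TEST/END TEST banners and the real/user/sys timing lines seen throughout. The real wrapper lives in common/autotest_common.sh and also manages xtrace state; the sketch below keeps only the behaviour visible in this log and is not its actual body:

    # Illustrative reduction of run_test: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"          # source of the "real 0mX.XXXs" lines
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return "$rc"
    }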
00:08:01.853 [2024-07-12 00:27:06.601539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65603 ] 00:08:01.853 [2024-07-12 00:27:06.765405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.111 [2024-07-12 00:27:07.024204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.375 00:27:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:04.290 00:27:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.290 00:08:04.290 real 0m2.611s 00:08:04.290 user 0m2.307s 00:08:04.290 sys 0m0.204s 00:08:04.290 ************************************ 00:08:04.290 END TEST accel_dualcast 00:08:04.290 ************************************ 00:08:04.290 00:27:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.290 00:27:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:04.290 00:27:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.290 00:27:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:04.290 00:27:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:04.290 00:27:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.290 00:27:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.290 ************************************ 00:08:04.290 START TEST accel_compare 00:08:04.290 ************************************ 00:08:04.290 00:27:09 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:04.290 00:27:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:04.549 [2024-07-12 00:27:09.256667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
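The build_accel_config expansion just above (accel.sh@31 through @41) assembles the optional JSON config accel_perf reads from /dev/fd/62. In this run every guard is false, which is exactly what the three [[ 0 -gt 0 ]] records and the [[ -n '' ]] record show, so the array stays empty and jq -r . emits nothing. A sketch with the same guard structure; the option variable names are assumptions, since xtrace only shows their already-expanded values:

    # Hypothetical variable names (opt_a/opt_b/opt_c/extra_json); the
    # guard shapes mirror accel.sh@32-36 as traced above.
    build_accel_config() {
        accel_json_cfg=()
        [[ ${#opt_a[@]} -gt 0 ]] && accel_json_cfg+=("${opt_a[@]}")
        [[ ${#opt_b[@]} -gt 0 ]] && accel_json_cfg+=("${opt_b[@]}")
        [[ ${#opt_c[@]} -gt 0 ]] && accel_json_cfg+=("${opt_c[@]}")
        [[ -n "$extra_json" ]]   && accel_json_cfg+=("$extra_json")
        local IFS=,                              # accel.sh@40
        echo "${accel_json_cfg[*]}" | jq -r .    # empty input in this run
    }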
00:08:04.549 [2024-07-12 00:27:09.256833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65644 ] 00:08:04.549 [2024-07-12 00:27:09.424490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.808 [2024-07-12 00:27:09.721725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.068 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:05.069 00:27:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 
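The bracketed DPDK EAL parameters logged at each accel_perf startup (see the accel_compare launch above) pin the app to one core (-c 0x1), disable shared config and telemetry, select physical-address IOVA mode, fix --base-virtaddr for reproducible mappings, and namespace hugepage files per process with --file-prefix=spdk_pid<pid>, so concurrent SPDK processes cannot collide. Only the pid suffix varies between the tests in this section (65644 here; 65696, 65743, and 65789 below). The prefix tracks the process id, roughly:

    # Illustrative: per-process hugepage namespace as seen in the EAL line.
    file_prefix="spdk_pid$$"
    echo "$file_prefix"   # e.g. spdk_pid65644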
00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:06.985 00:27:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.985 00:08:06.985 real 0m2.682s 00:08:06.985 user 0m2.382s 00:08:06.985 sys 0m0.201s 00:08:06.985 ************************************ 00:08:06.985 END TEST accel_compare 00:08:06.985 ************************************ 00:08:06.985 00:27:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.985 00:27:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:07.244 00:27:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.244 00:27:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:07.244 00:27:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:07.244 00:27:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.244 00:27:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.244 ************************************ 00:08:07.244 START TEST accel_xor 00:08:07.244 ************************************ 00:08:07.244 00:27:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:07.244 00:27:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:07.244 [2024-07-12 00:27:12.011785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
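The xtrace_disable calls from common/autotest_common.sh@1124, each followed by set +x, quiet the shell trace while the harness prints banners and timing at a test boundary, which is why those lines carry no xtrace prefix. A sketch of the idiom as visible here; the real helper pair also saves and restores the prior xtrace state:

    # Minimal visible behaviour of the xtrace_disable/restore idiom.
    xtrace_disable() { set +x; }
    xtrace_restore() { set -x; }

    xtrace_disable
    echo "END TEST accel_compare"   # bookkeeping prints untraced
    xtrace_restore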
00:08:07.244 [2024-07-12 00:27:12.011996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65696 ] 00:08:07.504 [2024-07-12 00:27:12.184681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.763 [2024-07-12 00:27:12.439797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
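This first accel_xor run was launched without -x and parses val=2 for its source count, while the rerun below (run_test accel_xor accel_test -t 1 -w xor -y -x 3) parses val=3; the inference is that -x sets how many 4096-byte source buffers get XOR-combined into the destination. The two payload commands, verbatim from the run_test lines:

    # Verbatim payloads dispatched by run_test in this section; accel_test
    # is the accel.sh helper wrapping accel_perf.
    accel_test -t 1 -w xor -y          # config parses val=2 (default)
    accel_test -t 1 -w xor -y -x 3     # config parses val=3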
00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:07.763 00:27:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.662 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.662 00:27:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.662 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.662 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.662 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.663 00:27:14 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:09.663 ************************************ 00:08:09.663 END TEST accel_xor 00:08:09.663 ************************************ 00:08:09.663 00:27:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.663 00:08:09.663 real 0m2.649s 00:08:09.663 user 0m2.314s 00:08:09.663 sys 0m0.237s 00:08:09.663 00:27:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.663 00:27:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:09.921 00:27:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.921 00:27:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:09.921 00:27:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:09.921 00:27:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.922 00:27:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.922 ************************************ 00:08:09.922 START TEST accel_xor 00:08:09.922 ************************************ 00:08:09.922 00:27:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:09.922 00:27:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:09.922 [2024-07-12 00:27:14.702810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
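Each test closes with the three accel.sh@27 checks visible above: the parsed module is non-empty, the parsed opcode is non-empty, and the module equals the expected backend. The backslash run in \s\o\f\t\w\a\r\e simply escapes every character of the literal so the [[ == ]] match cannot be treated as a glob pattern. The shape of those assertions, as a standalone sketch:

    # End-of-test assertions, sketched from the accel.sh@27 records;
    # under the harness a false test fails the run.
    [[ -n $accel_module ]]                     # a module was parsed
    [[ -n $accel_opc ]]                        # the opcode was parsed
    [[ $accel_module == \s\o\f\t\w\a\r\e ]]    # literal, glob-proofed match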
00:08:09.922 [2024-07-12 00:27:14.703039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65743 ] 00:08:10.180 [2024-07-12 00:27:14.889740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.439 [2024-07-12 00:27:15.177009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
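The -x 3 config parsed above (val=3) means three source buffers are combined per operation. XOR is associative and commutative, so the result does not depend on pairing order, which is what allows a software backend to fold any number of sources with pairwise XORs. A one-line arithmetic illustration with arbitrary example bytes:

    # (a ^ b) ^ c == a ^ (b ^ c); both lines print f0.
    printf '%02x\n' $(( (0xaa ^ 0x55) ^ 0x0f ))
    printf '%02x\n' $(( 0xaa ^ (0x55 ^ 0x0f) ))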
00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:10.698 00:27:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.599 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.600 ************************************ 00:08:12.600 END TEST accel_xor 00:08:12.600 ************************************ 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:12.600 00:27:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.600 00:08:12.600 real 0m2.670s 00:08:12.600 user 0m2.347s 00:08:12.600 sys 0m0.223s 00:08:12.600 00:27:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.600 00:27:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:12.600 00:27:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.600 00:27:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:12.600 00:27:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:12.600 00:27:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.600 00:27:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.600 ************************************ 00:08:12.600 START TEST accel_dif_verify 00:08:12.600 ************************************ 00:08:12.600 00:27:17 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:12.600 00:27:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:12.600 [2024-07-12 00:27:17.407731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
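The accel_dif_verify test starting above exercises T10 DIF (Data Integrity Field) checking: each protected block carries an 8-byte field holding a guard CRC plus application and reference tags. Its parsed config, read out below, shows two '4096 bytes' buffers, a '512 bytes' block size, and the '8 bytes' DIF size, i.e. eight 512-byte blocks per buffer with one field each. Sanity arithmetic on those numbers:

    # Sizes taken from the config values parsed below.
    buf=4096 blk=512 dif=8
    echo "blocks per buffer:    $(( buf / blk ))"         # 8
    echo "DIF bytes per buffer: $(( buf / blk * dif ))"   # 64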
00:08:12.600 [2024-07-12 00:27:17.407883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65789 ] 00:08:12.859 [2024-07-12 00:27:17.574822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.117 [2024-07-12 00:27:17.855294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.375 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:13.376 00:27:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:15.343 00:27:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.343 00:08:15.343 real 0m2.641s 00:08:15.343 user 0m2.330s 00:08:15.343 sys 0m0.211s 00:08:15.343 00:27:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.343 00:27:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:15.343 ************************************ 00:08:15.343 END TEST accel_dif_verify 00:08:15.343 ************************************ 00:08:15.343 00:27:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.343 00:27:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:15.343 00:27:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:15.343 00:27:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.343 00:27:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.343 ************************************ 00:08:15.343 START TEST accel_dif_generate 00:08:15.343 ************************************ 00:08:15.343 00:27:20 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:15.343 00:27:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:15.343 00:27:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:15.343 00:27:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:15.343 00:27:20 
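A minimal sketch of re-running the step above by hand, using only the flags visible in the trace (-c, -t, -w); feeding an empty JSON object over a /dev/fd descriptor is an assumption standing in for the accel config that build_accel_config assembles:

# hugepages must already be configured (e.g. via SPDK's scripts/setup.sh)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') -t 1 -w dif_verify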
00:08:15.343 00:27:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:15.343 00:27:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:15.343 ************************************
00:08:15.343 START TEST accel_dif_generate
00:08:15.343 ************************************
00:08:15.343 00:27:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:15.343 [2024-07-12 00:27:20.106770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:15.343 [2024-07-12 00:27:20.106935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65836 ]
00:08:15.601 [2024-07-12 00:27:20.285564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.858 [2024-07-12 00:27:20.556407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.858 00:27:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # config values traced (repetitive IFS=:/read/case val loop condensed): 0x1, dif_generate (accel_opc=dif_generate), '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No
00:08:17.758 00:27:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:17.758 00:27:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:08:17.758 00:27:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:17.758 real 0m2.576s
00:08:17.758 user 0m0.016s
00:08:17.758 sys 0m0.005s
00:08:17.758 ************************************
00:08:17.758 END TEST accel_dif_generate
00:08:17.758 ************************************
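The START/END banners and real/user/sys lines repeated through this log come from the run_test wrapper. A behavioral sketch, not SPDK's actual autotest_common.sh implementation (which also toggles xtrace state); run_test_sketch is a hypothetical name:

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
# usage mirroring the log (assumes accel_test is in scope, as in accel.sh):
run_test_sketch accel_dif_generate accel_test -t 1 -w dif_generate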
00:08:17.758 00:27:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:17.758 00:27:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:17.758 ************************************
00:08:17.758 START TEST accel_dif_generate_copy
00:08:17.758 ************************************
00:08:17.758 00:27:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:18.016 [2024-07-12 00:27:22.723131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:18.017 [2024-07-12 00:27:22.723318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65888 ]
00:08:18.290 [2024-07-12 00:27:22.886320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.549 [2024-07-12 00:27:23.116272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.549 00:27:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # config values traced (repetitive IFS=:/read/case val loop condensed): 0x1, dif_generate_copy (accel_opc=dif_generate_copy), '4096 bytes', '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No
00:08:20.450 00:27:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:20.450 00:27:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:08:20.450 00:27:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:20.450 real 0m2.571s
00:08:20.450 user 0m2.287s
00:08:20.450 sys 0m0.190s
00:08:20.450 ************************************
00:08:20.450 END TEST accel_dif_generate_copy
00:08:20.450 ************************************
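The repeated case "$var" in / IFS=: / read -r var val records condensed above appear to be one parsing idiom: accel.sh splits each line of accel_perf output on a colon and dispatches on the key. A minimal sketch of that idiom, with hypothetical sample input and key names:

while IFS=: read -r var val; do
    case "$var" in
        *opc*) accel_opc=$val ;;        # workload name, e.g. dif_generate_copy
        *module*) accel_module=$val ;;  # backing module, e.g. software
    esac
done <<'EOF'
opc:dif_generate_copy
module:software
EOF
echo "ran $accel_opc on the $accel_module module"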
00:08:20.450 00:27:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:20.450 00:27:25 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:08:20.450 00:27:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:20.450 ************************************
00:08:20.450 START TEST accel_comp
00:08:20.450 ************************************
00:08:20.450 00:27:25 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:20.450 [2024-07-12 00:27:25.362151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:20.450 [2024-07-12 00:27:25.362357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65929 ]
00:08:20.708 [2024-07-12 00:27:25.528233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.967 [2024-07-12 00:27:25.777660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.226 00:27:25 accel.accel_comp -- accel/accel.sh@20 -- # config values traced (repetitive IFS=:/read/case val loop condensed): 0x1, compress (accel_opc=compress), '4096 bytes', software (accel_module=software), /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', No
00:08:23.131 00:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:23.131 00:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:08:23.131 00:27:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:23.131 real 0m2.597s
00:08:23.131 user 0m2.297s
00:08:23.131 sys 0m0.203s
00:08:23.131 ************************************
00:08:23.131 END TEST accel_comp
00:08:23.131 ************************************
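By hand, the compress step above reduces to the invocation below, again using only flags from the trace; -l points accel_perf at the repo's bib sample file as compression input, and the empty JSON config object is the same stand-in assumption as before:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib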
00:08:23.131 00:27:27 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:23.131 00:27:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:08:23.131 ************************************
00:08:23.131 START TEST accel_decomp
00:08:23.131 ************************************
00:08:23.131 00:27:27 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:08:23.131 [2024-07-12 00:27:27.996761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:23.131 [2024-07-12 00:27:27.997468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65981 ]
00:08:23.390 [2024-07-12 00:27:28.159279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.648 [2024-07-12 00:27:28.403280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.905 00:27:28 accel.accel_decomp -- accel/accel.sh@20 -- # config values traced (repetitive IFS=:/read/case val loop condensed): 0x1, decompress (accel_opc=decompress), '4096 bytes', software (accel_module=software), /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:08:25.802 00:27:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:25.802 00:27:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:25.802 00:27:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:25.802 real 0m2.550s
00:08:25.802 user 0m2.283s
00:08:25.802 sys 0m0.173s
00:08:25.802 ************************************
00:08:25.802 END TEST accel_decomp
00:08:25.802 ************************************
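The decompress step adds -y, which asks accel_perf to verify the result; that is why this trace reads val=Yes where the earlier workloads read val=No. A sketch under the same empty-config assumption:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y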
00:08:25.802 00:27:30 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:25.802 00:27:30 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:08:25.802 ************************************
00:08:25.802 START TEST accel_decomp_full
00:08:25.802 ************************************
00:08:25.802 00:27:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:08:25.802 [2024-07-12 00:27:30.593120] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:25.802 [2024-07-12 00:27:30.593291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66022 ]
00:08:26.060 [2024-07-12 00:27:30.749500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.060 [2024-07-12 00:27:30.979201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.318 00:27:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # config values traced (repetitive IFS=:/read/case val loop condensed): 0x1, decompress (accel_opc=decompress), '111250 bytes', software (accel_module=software), /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:28.262 real 0m2.528s
00:08:28.262 user 0m2.243s
00:08:28.262 sys 0m0.192s
00:08:28.262 ************************************
00:08:28.262 END TEST accel_decomp_full
00:08:28.262 ************************************
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.261 00:27:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.262 00:27:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.262 00:08:28.262 real 0m2.528s 00:08:28.262 user 0m2.243s 00:08:28.262 sys 0m0.192s 00:08:28.262 00:27:33 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.262 ************************************ 00:08:28.262 END TEST accel_decomp_full 00:08:28.262 ************************************ 00:08:28.262 00:27:33 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 00:27:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.262 00:27:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:28.262 00:27:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:28.262 00:27:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.262 00:27:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 ************************************ 00:08:28.262 START TEST accel_decomp_mcore 00:08:28.262 ************************************ 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:28.262 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:28.262 [2024-07-12 00:27:33.172687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:28.262 [2024-07-12 00:27:33.172874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66074 ] 00:08:28.520 [2024-07-12 00:27:33.334236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.779 [2024-07-12 00:27:33.574117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.779 [2024-07-12 00:27:33.574277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.779 [2024-07-12 00:27:33.575242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.779 [2024-07-12 00:27:33.575222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.037 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:29.038 00:27:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.938 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.939 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.939 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.939 00:27:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.939 00:08:30.939 real 0m2.572s 00:08:30.939 user 0m7.395s 00:08:30.939 sys 0m0.212s 00:08:30.939 00:27:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.939 00:27:35 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:30.939 ************************************ 00:08:30.939 END TEST accel_decomp_mcore 00:08:30.939 ************************************ 00:08:30.939 00:27:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.939 00:27:35 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.939 00:27:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:30.939 00:27:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.939 00:27:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.939 ************************************ 00:08:30.939 START TEST accel_decomp_full_mcore 00:08:30.939 ************************************ 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.939 00:27:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:30.939 00:27:35 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:30.939 [2024-07-12 00:27:35.806274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:30.939 [2024-07-12 00:27:35.806488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66118 ] 00:08:31.197 [2024-07-12 00:27:35.981014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.455 [2024-07-12 00:27:36.222091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.455 [2024-07-12 00:27:36.222248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.455 [2024-07-12 00:27:36.222371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.455 [2024-07-12 00:27:36.222387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.713 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.714 00:27:36 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.714 00:27:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.618 00:08:33.618 real 0m2.633s 00:08:33.618 user 0m0.019s 00:08:33.618 sys 0m0.005s 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.618 00:27:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:33.618 ************************************ 00:08:33.618 END TEST accel_decomp_full_mcore 00:08:33.618 ************************************ 00:08:33.618 00:27:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.618 00:27:38 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.618 00:27:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:33.618 00:27:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.618 00:27:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.618 ************************************ 00:08:33.618 START TEST accel_decomp_mthread 00:08:33.618 ************************************ 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:33.618 00:27:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:33.618 [2024-07-12 00:27:38.494310] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
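accel_decomp_mthread, starting here, switches scaling knobs: instead of the `-m 0xf` core mask the mcore cases used, it passes `-T 2`, asking accel_perf for two worker threads while the EAL line below shows the app still pinned to one core (`-c 0x1`, a single reactor on core 0). A hedged side-by-side of the two knobs, reusing the paths from this log and the same empty-JSON stand-in for the config:

  # Sketch only; same binary and test file as the invocations logged here.
  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  cfg=$(mktemp) && echo '{}' >"$cfg"

  "$accel_perf" -c "$cfg" -t 1 -w decompress -l "$bib" -y -m 0xf  # mcore: 4 reactors
  "$accel_perf" -c "$cfg" -t 1 -w decompress -l "$bib" -y -T 2    # mthread: 1 reactor, 2 threads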
00:08:33.618 [2024-07-12 00:27:38.494539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66173 ] 00:08:33.876 [2024-07-12 00:27:38.654399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.135 [2024-07-12 00:27:38.894023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.394 00:27:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.297 00:08:36.297 real 0m2.581s 00:08:36.297 user 0m2.279s 00:08:36.297 sys 0m0.210s 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.297 ************************************ 00:08:36.297 END TEST accel_decomp_mthread 00:08:36.297 ************************************ 00:08:36.297 00:27:41 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:36.297 00:27:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.297 00:27:41 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.297 00:27:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:36.297 00:27:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.297 00:27:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.297 ************************************ 00:08:36.297 START 
TEST accel_decomp_full_mthread 00:08:36.297 ************************************ 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.297 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.298 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.298 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.298 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.298 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:36.298 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:36.298 [2024-07-12 00:27:41.109679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
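accel_decomp_full_mthread combines the two variations seen so far: whole-file operations (`-o 0`, under which the harness apparently submits the full 111250-byte test vector per operation, per the `val='111250 bytes'` entries, instead of the 4096-byte default) and two worker threads (`-T 2`). Taken together, this stretch of the suite is a small matrix over one decompress workload; a hypothetical compact form of it, with the same definitions as the earlier sketches:

  # Sketch only: the five variants logged in this section.
  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  cfg=$(mktemp) && echo '{}' >"$cfg"
  for extra in "-o 0" "-m 0xf" "-o 0 -m 0xf" "-T 2" "-o 0 -T 2"; do
    # unquoted $extra is intentional word-splitting; no spaces in these paths
    "$accel_perf" -c "$cfg" -t 1 -w decompress -l "$bib" -y $extra
  done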
00:08:36.298 [2024-07-12 00:27:41.109868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66220 ] 00:08:36.556 [2024-07-12 00:27:41.268826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.815 [2024-07-12 00:27:41.502034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.815 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.816 00:27:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.816 00:27:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.716 00:08:38.716 real 0m2.565s 00:08:38.716 user 0m2.302s 00:08:38.716 sys 0m0.169s 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.716 00:27:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:38.716 ************************************ 00:08:38.716 END TEST accel_decomp_full_mthread 00:08:38.716 ************************************ 
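All five decompress variants complete in roughly 2.5-2.6 s of wall time; the `real`/`user`/`sys` triplets just before each END TEST banner come from bash's `time` of the test function. Pulling them out of a saved copy of this log (hypothetical file name) summarizes the section:

  grep -Eo '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' autotest-run.log
  # which for this run reduces to, per test:
  #   accel_decomp_full          real 2.528s  user 2.243s  sys 0.192s
  #   accel_decomp_mcore         real 2.572s  user 7.395s  sys 0.212s  (user > real: work spread over the 0xf mask)
  #   accel_decomp_full_mcore    real 2.633s  user 0.019s  sys 0.005s
  #   accel_decomp_mthread       real 2.581s  user 2.279s  sys 0.210s
  #   accel_decomp_full_mthread  real 2.565s  user 2.302s  sys 0.169s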
00:08:38.974 00:27:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:38.974 00:27:43 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:38.974 00:27:43 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:38.974 00:27:43 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:38.974 00:27:43 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:38.974 00:27:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.974 00:27:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.974 00:27:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.974 00:27:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.974 00:27:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.974 00:27:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.974 00:27:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.974 00:27:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:38.974 00:27:43 accel -- accel/accel.sh@41 -- # jq -r . 00:08:38.974 ************************************ 00:08:38.974 START TEST accel_dif_functional_tests 00:08:38.974 ************************************ 00:08:38.974 00:27:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:38.974 [2024-07-12 00:27:43.793032] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:38.974 [2024-07-12 00:27:43.793208] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66267 ] 00:08:39.232 [2024-07-12 00:27:43.965799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.490 [2024-07-12 00:27:44.208651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.490 [2024-07-12 00:27:44.208757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.490 [2024-07-12 00:27:44.208765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.749 00:08:39.749 00:08:39.749 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.749 http://cunit.sourceforge.net/ 00:08:39.749 00:08:39.749 00:08:39.749 Suite: accel_dif 00:08:39.749 Test: verify: DIF generated, GUARD check ...passed 00:08:39.749 Test: verify: DIF generated, APPTAG check ...passed 00:08:39.749 Test: verify: DIF generated, REFTAG check ...passed 00:08:39.749 Test: verify: DIF not generated, GUARD check ...passed 00:08:39.749 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 00:27:44.536554] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:39.749 passed 00:08:39.749 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 00:27:44.536703] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:39.749 [2024-07-12 00:27:44.536765] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:39.749 passed 00:08:39.749 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:39.749 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:39.749 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-12 00:27:44.537016] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:08:39.749 passed 00:08:39.749 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:39.749 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:39.749 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 00:27:44.537335] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:39.749 passed 00:08:39.749 Test: verify copy: DIF generated, GUARD check ...passed 00:08:39.749 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:39.749 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:39.749 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 00:27:44.537924] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:39.749 passed 00:08:39.749 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:39.749 Test: verify copy: DIF not generated, REFTAG check ...passed 00:08:39.749 Test: generate copy: DIF generated, GUARD check ...passed 00:08:39.749 Test: generate copy: DIF generated, APTTAG check ...[2024-07-12 00:27:44.538002] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:39.749 [2024-07-12 00:27:44.538264] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:39.749 passed 00:08:39.749 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:39.749 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:39.749 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:39.749 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:39.749 Test: generate copy: iovecs-len validate ...passed 00:08:39.749 Test: generate copy: buffer alignment validate ...passed 00:08:39.749 00:08:39.749 Run Summary: Type Total Ran Passed Failed Inactive 00:08:39.749 suites 1 1 n/a 0 0 00:08:39.749 tests 26 26 26 0 0 00:08:39.749 asserts 115 115 115 0 n/a 00:08:39.749 00:08:39.749 Elapsed time = 0.007 seconds 00:08:39.749 [2024-07-12 00:27:44.538886] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
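All 26 DIF tests passed; the interleaved dif.c *ERROR* lines are expected output, since the negative cases deliberately corrupt a Guard (CRC), Application Tag, or Reference Tag and assert that _dif_verify reports the mismatch. Note also how the binary was launched above: it receives the generated accel JSON config as /dev/fd/62 rather than a temp file. A minimal invocation sketch, assuming bash process substitution supplies the descriptor and using an empty JSON object where build_accel_config would emit the real config:

# Sketch (bash): pass a generated JSON config to the dif test app through a
# file descriptor instead of a temporary file. The fd number bash picks may
# differ from the 62 seen in the log, and '{}' stands in for the real config.
dif_bin=/home/vagrant/spdk_repo/spdk/test/accel/dif/dif
"$dif_bin" -c <(echo '{}')    # <(...) expands to a /dev/fd/NN path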
00:08:41.126 00:08:41.126 real 0m2.008s 00:08:41.126 user 0m3.789s 00:08:41.126 sys 0m0.273s 00:08:41.126 ************************************ 00:08:41.126 END TEST accel_dif_functional_tests 00:08:41.126 ************************************ 00:08:41.126 00:27:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.126 00:27:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:41.126 00:27:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:41.126 00:08:41.126 real 1m2.550s 00:08:41.126 user 1m7.186s 00:08:41.126 sys 0m6.187s 00:08:41.126 00:27:45 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.126 00:27:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.126 ************************************ 00:08:41.126 END TEST accel 00:08:41.126 ************************************ 00:08:41.126 00:27:45 -- common/autotest_common.sh@1142 -- # return 0 00:08:41.126 00:27:45 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:41.126 00:27:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.126 00:27:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.126 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:41.126 ************************************ 00:08:41.126 START TEST accel_rpc 00:08:41.126 ************************************ 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:41.126 * Looking for test storage... 00:08:41.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:41.126 00:27:45 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:41.126 00:27:45 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66349 00:08:41.126 00:27:45 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66349 00:08:41.126 00:27:45 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66349 ']' 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.126 00:27:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.126 [2024-07-12 00:27:45.997412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:41.126 [2024-07-12 00:27:45.997602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66349 ] 00:08:41.384 [2024-07-12 00:27:46.172206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.642 [2024-07-12 00:27:46.409945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.209 00:27:46 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.209 00:27:46 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:42.209 00:27:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:42.209 00:27:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:42.209 00:27:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:42.209 00:27:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:42.209 00:27:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:42.209 00:27:46 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.209 00:27:46 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.209 00:27:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 ************************************ 00:08:42.209 START TEST accel_assign_opcode 00:08:42.209 ************************************ 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 [2024-07-12 00:27:46.955041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 [2024-07-12 00:27:46.963030] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:42.209 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.210 00:27:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:42.810 00:27:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.067 software 00:08:43.067 00:08:43.067 real 0m0.824s 00:08:43.067 user 0m0.053s 00:08:43.067 sys 0m0.011s 00:08:43.067 00:27:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.067 ************************************ 00:08:43.067 END TEST accel_assign_opcode 00:08:43.067 00:27:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:43.067 ************************************ 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:43.067 00:27:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66349 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66349 ']' 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66349 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66349 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66349' 00:08:43.067 killing process with pid 66349 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@967 -- # kill 66349 00:08:43.067 00:27:47 accel_rpc -- common/autotest_common.sh@972 -- # wait 66349 00:08:45.594 00:08:45.594 real 0m4.232s 00:08:45.594 user 0m4.161s 00:08:45.594 sys 0m0.638s 00:08:45.594 00:27:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.594 ************************************ 00:08:45.594 00:27:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.594 END TEST accel_rpc 00:08:45.594 ************************************ 00:08:45.594 00:27:50 -- common/autotest_common.sh@1142 -- # return 0 00:08:45.594 00:27:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:45.594 00:27:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.594 00:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.594 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.594 ************************************ 00:08:45.594 START TEST app_cmdline 00:08:45.594 ************************************ 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:45.594 * Looking for test storage... 
00:08:45.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:45.594 00:27:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:45.594 00:27:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66483 00:08:45.594 00:27:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66483 00:08:45.594 00:27:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66483 ']' 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.594 00:27:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:45.594 [2024-07-12 00:27:50.276801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:45.594 [2024-07-12 00:27:50.277011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66483 ] 00:08:45.594 [2024-07-12 00:27:50.456365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.852 [2024-07-12 00:27:50.703276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.785 00:27:51 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.785 00:27:51 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:46.785 00:27:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:47.042 { 00:08:47.042 "fields": { 00:08:47.042 "commit": "719d03c6a", 00:08:47.042 "major": 24, 00:08:47.042 "minor": 9, 00:08:47.042 "patch": 0, 00:08:47.042 "suffix": "-pre" 00:08:47.042 }, 00:08:47.042 "version": "SPDK v24.09-pre git sha1 719d03c6a" 00:08:47.042 } 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:47.042 00:27:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:47.042 00:27:51 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.042 00:27:51 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:47.300 2024/07/12 00:27:52 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:47.300 request: 00:08:47.300 { 00:08:47.300 "method": "env_dpdk_get_mem_stats", 00:08:47.300 "params": {} 00:08:47.300 } 00:08:47.300 Got JSON-RPC error response 00:08:47.300 GoRPCClient: error on JSON-RPC call 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.300 00:27:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66483 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66483 ']' 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66483 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66483 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:47.300 killing process with pid 66483 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66483' 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 66483 00:08:47.300 00:27:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 66483 00:08:49.826 00:08:49.826 real 0m4.292s 00:08:49.826 user 0m4.721s 00:08:49.826 sys 0m0.654s 00:08:49.826 00:27:54 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.826 ************************************ 00:08:49.826 END TEST app_cmdline 00:08:49.826 ************************************ 00:08:49.826 00:27:54 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.826 00:27:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:49.827 00:27:54 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:49.827 00:27:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.827 00:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.827 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:49.827 ************************************ 00:08:49.827 START TEST version 00:08:49.827 ************************************ 00:08:49.827 00:27:54 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:49.827 * Looking for test storage... 00:08:49.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:49.827 00:27:54 version -- app/version.sh@17 -- # get_header_version major 00:08:49.827 00:27:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # cut -f2 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:49.827 00:27:54 version -- app/version.sh@17 -- # major=24 00:08:49.827 00:27:54 version -- app/version.sh@18 -- # get_header_version minor 00:08:49.827 00:27:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # cut -f2 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:49.827 00:27:54 version -- app/version.sh@18 -- # minor=9 00:08:49.827 00:27:54 version -- app/version.sh@19 -- # get_header_version patch 00:08:49.827 00:27:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # cut -f2 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:49.827 00:27:54 version -- app/version.sh@19 -- # patch=0 00:08:49.827 00:27:54 version -- app/version.sh@20 -- # get_header_version suffix 00:08:49.827 00:27:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # cut -f2 00:08:49.827 00:27:54 version -- app/version.sh@14 -- # tr -d '"' 00:08:49.827 00:27:54 version -- app/version.sh@20 -- # suffix=-pre 00:08:49.827 00:27:54 version -- app/version.sh@22 -- # version=24.9 00:08:49.827 00:27:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:49.827 00:27:54 version -- app/version.sh@28 -- # version=24.9rc0 00:08:49.827 00:27:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:49.827 00:27:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:49.827 00:27:54 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:49.827 00:27:54 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:49.827 00:08:49.827 real 0m0.153s 00:08:49.827 user 0m0.080s 00:08:49.827 sys 0m0.104s 00:08:49.827 00:27:54 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.827 00:27:54 version -- common/autotest_common.sh@10 -- # set 
+x 00:08:49.827 ************************************ 00:08:49.827 END TEST version 00:08:49.827 ************************************ 00:08:49.827 00:27:54 -- common/autotest_common.sh@1142 -- # return 0 00:08:49.827 00:27:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@198 -- # uname -s 00:08:49.827 00:27:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:49.827 00:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:49.827 00:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:49.827 00:27:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:49.827 00:27:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.827 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:49.827 00:27:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:49.827 00:27:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:49.827 00:27:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:49.827 00:27:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.827 00:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.827 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:49.827 ************************************ 00:08:49.827 START TEST nvmf_tcp 00:08:49.827 ************************************ 00:08:49.827 00:27:54 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:49.827 * Looking for test storage... 00:08:49.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.827 00:27:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.086 00:27:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.086 00:27:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.086 00:27:54 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.086 00:27:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:50.086 00:27:54 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:50.086 00:27:54 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.086 00:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:50.086 00:27:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:50.086 00:27:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.086 00:27:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.086 00:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.086 ************************************ 00:08:50.086 START TEST nvmf_example 00:08:50.086 ************************************ 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:50.086 * Looking for test storage... 
00:08:50.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.086 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.087 00:27:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:50.087 Cannot find device "nvmf_init_br" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:50.087 Cannot find device "nvmf_tgt_br" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.087 Cannot find device "nvmf_tgt_br2" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:50.087 Cannot find device "nvmf_init_br" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:50.087 Cannot find device "nvmf_tgt_br" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:50.087 Cannot find device 
"nvmf_tgt_br2" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:50.087 Cannot find device "nvmf_br" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:50.087 Cannot find device "nvmf_init_if" 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:08:50.087 00:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.087 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:08:50.087 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.087 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:50.345 00:08:50.345 --- 10.0.0.2 ping statistics --- 00:08:50.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.345 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:50.345 00:08:50.345 --- 10.0.0.3 ping statistics --- 00:08:50.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.345 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:50.345 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:50.603 00:08:50.603 --- 10.0.0.1 ping statistics --- 00:08:50.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.603 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=66864 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
66864 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 66864 ']' 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.603 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.604 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.604 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.604 00:27:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.550 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:51.808 00:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:03.999 Initializing NVMe Controllers 00:09:03.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.999 Initialization complete. Launching workers. 00:09:03.999 ======================================================== 00:09:03.999 Latency(us) 00:09:03.999 Device Information : IOPS MiB/s Average min max 00:09:03.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12518.63 48.90 5112.09 1180.24 20235.84 00:09:03.999 ======================================================== 00:09:03.999 Total : 12518.63 48.90 5112.09 1180.24 20235.84 00:09:03.999 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.999 rmmod nvme_tcp 00:09:03.999 rmmod nvme_fabrics 00:09:03.999 rmmod nvme_keyring 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 66864 ']' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 66864 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 66864 ']' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 66864 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66864 00:09:03.999 killing process with pid 66864 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66864' 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 66864 00:09:03.999 00:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 66864 00:09:03.999 nvmf threads initialize successfully 00:09:03.999 bdev subsystem init successfully 
00:09:03.999 created a nvmf target service 00:09:03.999 create targets's poll groups done 00:09:03.999 all subsystems of target started 00:09:03.999 nvmf target is running 00:09:03.999 all subsystems of target stopped 00:09:03.999 destroy targets's poll groups done 00:09:03.999 destroyed the nvmf target service 00:09:03.999 bdev subsystem finish successfully 00:09:03.999 nvmf threads destroy successfully 00:09:03.999 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.999 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.999 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:04.000 00:09:04.000 real 0m13.496s 00:09:04.000 user 0m47.650s 00:09:04.000 sys 0m1.948s 00:09:04.000 ************************************ 00:09:04.000 END TEST nvmf_example 00:09:04.000 ************************************ 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.000 00:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:04.000 00:28:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.000 00:28:08 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:04.000 00:28:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.000 00:28:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.000 00:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.000 ************************************ 00:09:04.000 START TEST nvmf_filesystem 00:09:04.000 ************************************ 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:04.000 * Looking for test storage... 
00:09:04.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:04.000 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:04.001 #define SPDK_CONFIG_H 00:09:04.001 #define SPDK_CONFIG_APPS 1 00:09:04.001 #define SPDK_CONFIG_ARCH native 00:09:04.001 #define SPDK_CONFIG_ASAN 1 00:09:04.001 #define SPDK_CONFIG_AVAHI 1 00:09:04.001 #undef SPDK_CONFIG_CET 00:09:04.001 #define SPDK_CONFIG_COVERAGE 1 00:09:04.001 #define SPDK_CONFIG_CROSS_PREFIX 00:09:04.001 #undef SPDK_CONFIG_CRYPTO 00:09:04.001 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:04.001 #undef SPDK_CONFIG_CUSTOMOCF 00:09:04.001 #undef SPDK_CONFIG_DAOS 00:09:04.001 #define SPDK_CONFIG_DAOS_DIR 00:09:04.001 #define SPDK_CONFIG_DEBUG 1 00:09:04.001 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:04.001 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:04.001 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:04.001 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:04.001 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:04.001 #undef SPDK_CONFIG_DPDK_UADK 00:09:04.001 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:04.001 #define SPDK_CONFIG_EXAMPLES 1 00:09:04.001 #undef SPDK_CONFIG_FC 00:09:04.001 #define SPDK_CONFIG_FC_PATH 00:09:04.001 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:04.001 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:04.001 #undef SPDK_CONFIG_FUSE 00:09:04.001 #undef SPDK_CONFIG_FUZZER 00:09:04.001 #define SPDK_CONFIG_FUZZER_LIB 00:09:04.001 #define SPDK_CONFIG_GOLANG 1 00:09:04.001 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:04.001 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:04.001 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:04.001 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:04.001 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:04.001 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:04.001 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:04.001 #define SPDK_CONFIG_IDXD 1 00:09:04.001 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:04.001 #undef SPDK_CONFIG_IPSEC_MB 00:09:04.001 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:04.001 #define SPDK_CONFIG_ISAL 1 00:09:04.001 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:04.001 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:04.001 #define SPDK_CONFIG_LIBDIR 00:09:04.001 #undef SPDK_CONFIG_LTO 00:09:04.001 #define SPDK_CONFIG_MAX_LCORES 128 00:09:04.001 #define SPDK_CONFIG_NVME_CUSE 1 00:09:04.001 #undef SPDK_CONFIG_OCF 00:09:04.001 #define SPDK_CONFIG_OCF_PATH 00:09:04.001 #define SPDK_CONFIG_OPENSSL_PATH 00:09:04.001 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:04.001 #define SPDK_CONFIG_PGO_DIR 00:09:04.001 #undef SPDK_CONFIG_PGO_USE 00:09:04.001 #define SPDK_CONFIG_PREFIX /usr/local 00:09:04.001 #undef SPDK_CONFIG_RAID5F 00:09:04.001 #undef SPDK_CONFIG_RBD 00:09:04.001 #define SPDK_CONFIG_RDMA 1 00:09:04.001 #define SPDK_CONFIG_RDMA_PROV verbs 
00:09:04.001 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:04.001 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:04.001 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:04.001 #define SPDK_CONFIG_SHARED 1 00:09:04.001 #undef SPDK_CONFIG_SMA 00:09:04.001 #define SPDK_CONFIG_TESTS 1 00:09:04.001 #undef SPDK_CONFIG_TSAN 00:09:04.001 #define SPDK_CONFIG_UBLK 1 00:09:04.001 #define SPDK_CONFIG_UBSAN 1 00:09:04.001 #undef SPDK_CONFIG_UNIT_TESTS 00:09:04.001 #undef SPDK_CONFIG_URING 00:09:04.001 #define SPDK_CONFIG_URING_PATH 00:09:04.001 #undef SPDK_CONFIG_URING_ZNS 00:09:04.001 #define SPDK_CONFIG_USDT 1 00:09:04.001 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:04.001 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:04.001 #define SPDK_CONFIG_VFIO_USER 1 00:09:04.001 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:04.001 #define SPDK_CONFIG_VHOST 1 00:09:04.001 #define SPDK_CONFIG_VIRTIO 1 00:09:04.001 #undef SPDK_CONFIG_VTUNE 00:09:04.001 #define SPDK_CONFIG_VTUNE_DIR 00:09:04.001 #define SPDK_CONFIG_WERROR 1 00:09:04.001 #define SPDK_CONFIG_WPDK_DIR 00:09:04.001 #undef SPDK_CONFIG_XNVME 00:09:04.001 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:04.001 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:04.002 00:28:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:04.002 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 67123 ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 67123 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.46FS7x 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.46FS7x/tests/target /tmp/spdk.46FS7x 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6263177216 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:09:04.003 
00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13750145024 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5280526336 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13750145024 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5280526336 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93477494784 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6225285120 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:04.003 * Looking for test storage... 00:09:04.003 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13750145024 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:04.004 Cannot find device "nvmf_tgt_br" 00:09:04.004 00:28:08 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:09:04.004 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.005 Cannot find device "nvmf_tgt_br2" 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:04.005 Cannot find device "nvmf_tgt_br" 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:04.005 Cannot find device "nvmf_tgt_br2" 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:04.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:09:04.005 00:09:04.005 --- 10.0.0.2 ping statistics --- 00:09:04.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.005 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:04.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:04.005 00:09:04.005 --- 10.0.0.3 ping statistics --- 00:09:04.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.005 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:04.005 00:09:04.005 --- 10.0.0.1 ping statistics --- 00:09:04.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.005 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.005 ************************************ 00:09:04.005 START TEST nvmf_filesystem_no_in_capsule 00:09:04.005 ************************************ 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=67283 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 67283 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 67283 ']' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
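
The nvmf_veth_init phase above builds an isolated test network: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of the veth pairs, the host-side peers are enslaved to a bridge (nvmf_br), iptables rules admit NVMe/TCP traffic on port 4420, and connectivity is verified with pings in both directions. The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: teardown runs before setup so the script is idempotent, and each cleanup command is followed by `true` to swallow the error. A condensed standalone sketch of the same setup, using the interface and namespace names from this trace and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3) and error handling:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries traffic, the *_br end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator (root ns), 10.0.0.2 = target (in the ns)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP on 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
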
00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.005 00:28:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.328 [2024-07-12 00:28:09.034283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:04.328 [2024-07-12 00:28:09.034499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.328 [2024-07-12 00:28:09.210880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.586 [2024-07-12 00:28:09.435774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.586 [2024-07-12 00:28:09.435876] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.586 [2024-07-12 00:28:09.435892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.586 [2024-07-12 00:28:09.435918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.586 [2024-07-12 00:28:09.435929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.586 [2024-07-12 00:28:09.436156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.586 [2024-07-12 00:28:09.436310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.586 [2024-07-12 00:28:09.437186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.586 [2024-07-12 00:28:09.437204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 [2024-07-12 00:28:10.059379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.152 
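
With networking in place, nvmf_tgt is started inside the namespace and configured over its UNIX-socket RPC channel; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. A condensed sketch of the launch above plus the provisioning calls that follow over the next stretch of the trace (bdev, subsystem, namespace, listener); the readiness probe is a crude stand-in for the harness's waitforlisten, and -c 0 disables in-capsule data for this first pass:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # stand-in for waitforlisten: block until /var/tmp/spdk.sock answers RPCs
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After this, the initiator side connects from the root namespace with nvme connect using the generated hostnqn, exactly as the trace shows further down.
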
00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.152 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 Malloc1 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.717 [2024-07-12 00:28:10.645286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.717 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:05.975 { 00:09:05.975 "aliases": [ 00:09:05.975 "575323f5-50c1-44b3-b4e3-9e7ff5779841" 00:09:05.975 ], 00:09:05.975 "assigned_rate_limits": { 00:09:05.975 "r_mbytes_per_sec": 0, 00:09:05.975 "rw_ios_per_sec": 0, 00:09:05.975 "rw_mbytes_per_sec": 0, 00:09:05.975 "w_mbytes_per_sec": 0 00:09:05.975 }, 00:09:05.975 "block_size": 512, 00:09:05.975 "claim_type": "exclusive_write", 00:09:05.975 "claimed": true, 00:09:05.975 "driver_specific": {}, 00:09:05.975 "memory_domains": [ 00:09:05.975 { 00:09:05.975 "dma_device_id": "system", 00:09:05.975 "dma_device_type": 1 00:09:05.975 }, 00:09:05.975 { 00:09:05.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.975 "dma_device_type": 2 00:09:05.975 } 00:09:05.975 ], 00:09:05.975 "name": "Malloc1", 00:09:05.975 "num_blocks": 1048576, 00:09:05.975 "product_name": "Malloc disk", 00:09:05.975 "supported_io_types": { 00:09:05.975 "abort": true, 00:09:05.975 "compare": false, 00:09:05.975 "compare_and_write": false, 00:09:05.975 "copy": true, 00:09:05.975 "flush": true, 00:09:05.975 "get_zone_info": false, 00:09:05.975 "nvme_admin": false, 00:09:05.975 "nvme_io": false, 00:09:05.975 "nvme_io_md": false, 00:09:05.975 "nvme_iov_md": false, 00:09:05.975 "read": true, 00:09:05.975 "reset": true, 00:09:05.975 "seek_data": false, 00:09:05.975 "seek_hole": false, 00:09:05.975 "unmap": true, 00:09:05.975 "write": true, 00:09:05.975 "write_zeroes": true, 00:09:05.975 "zcopy": true, 00:09:05.975 "zone_append": false, 00:09:05.975 "zone_management": false 00:09:05.975 }, 00:09:05.975 "uuid": "575323f5-50c1-44b3-b4e3-9e7ff5779841", 00:09:05.975 "zoned": false 00:09:05.975 } 00:09:05.975 ]' 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:05.975 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.233 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.233 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.233 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:09:06.233 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:06.233 00:28:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:08.149 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:08.150 00:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:08.150 00:28:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:08.445 00:28:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.375 ************************************ 
00:09:09.375 START TEST filesystem_ext4 00:09:09.375 ************************************ 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:09.375 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:09.375 mke2fs 1.46.5 (30-Dec-2021) 00:09:09.375 Discarding device blocks: 0/522240 done 00:09:09.632 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:09.632 Filesystem UUID: 9e11f4d3-75d2-49a8-b2c7-ba7fbabb0e43 00:09:09.632 Superblock backups stored on blocks: 00:09:09.632 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:09.632 00:09:09.632 Allocating group tables: 0/64 done 00:09:09.632 Writing inode tables: 0/64 done 00:09:09.632 Creating journal (8192 blocks): done 00:09:09.632 Writing superblocks and filesystem accounting information: 0/64 done 00:09:09.632 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:09.632 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:09.889 00:28:14 
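
Each filesystem_* subtest runs the same minimal smoke test seen here: mount the new filesystem, create a file, flush, delete it, flush again, unmount, then confirm the target process is still alive and the exported block devices are still visible (the liveness checks close out each subtest just below). Condensed from this run's trace, with the device name and target pid taken from this run:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa      # prove the fs accepts writes over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 67283                              # target must not have crashed
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present
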
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 67283 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:09.889 ************************************ 00:09:09.889 END TEST filesystem_ext4 00:09:09.889 ************************************ 00:09:09.889 00:09:09.889 real 0m0.458s 00:09:09.889 user 0m0.030s 00:09:09.889 sys 0m0.056s 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.889 ************************************ 00:09:09.889 START TEST filesystem_btrfs 00:09:09.889 ************************************ 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:09.889 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:09.889 
00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:10.146 btrfs-progs v6.6.2 00:09:10.146 See https://btrfs.readthedocs.io for more information. 00:09:10.146 00:09:10.146 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:10.146 NOTE: several default settings have changed in version 5.15, please make sure 00:09:10.146 this does not affect your deployments: 00:09:10.146 - DUP for metadata (-m dup) 00:09:10.146 - enabled no-holes (-O no-holes) 00:09:10.146 - enabled free-space-tree (-R free-space-tree) 00:09:10.146 00:09:10.146 Label: (null) 00:09:10.146 UUID: 13d6c4f7-45c4-445d-8d9b-1f940ad5b78d 00:09:10.146 Node size: 16384 00:09:10.146 Sector size: 4096 00:09:10.146 Filesystem size: 510.00MiB 00:09:10.146 Block group profiles: 00:09:10.146 Data: single 8.00MiB 00:09:10.146 Metadata: DUP 32.00MiB 00:09:10.146 System: DUP 8.00MiB 00:09:10.146 SSD detected: yes 00:09:10.146 Zoned device: no 00:09:10.146 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:10.146 Runtime features: free-space-tree 00:09:10.146 Checksum: crc32c 00:09:10.146 Number of devices: 1 00:09:10.146 Devices: 00:09:10.146 ID SIZE PATH 00:09:10.146 1 510.00MiB /dev/nvme0n1p1 00:09:10.146 00:09:10.146 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:10.146 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:10.146 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:10.146 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:10.146 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 67283 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:10.147 ************************************ 00:09:10.147 END TEST filesystem_btrfs 00:09:10.147 ************************************ 00:09:10.147 00:09:10.147 real 0m0.296s 00:09:10.147 user 0m0.022s 00:09:10.147 sys 0m0.064s 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.147 
00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.147 ************************************ 00:09:10.147 START TEST filesystem_xfs 00:09:10.147 ************************************ 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:10.147 00:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:10.404 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:10.404 = sectsz=512 attr=2, projid32bit=1 00:09:10.404 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:10.404 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:10.404 data = bsize=4096 blocks=130560, imaxpct=25 00:09:10.404 = sunit=0 swidth=0 blks 00:09:10.404 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:10.404 log =internal log bsize=4096 blocks=16384, version=2 00:09:10.404 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:10.404 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:10.970 Discarding blocks...Done. 
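
make_filesystem is the small helper driving the ext4, btrfs, and xfs subtests; the only per-filesystem wrinkle visible in the trace is the force flag, since mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f. A condensed paraphrase (the real helper in autotest_common.sh also keeps i/force locals and retries a few times, omitted here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 is the odd one out: uppercase force flag
        if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
        mkfs."$fstype" $force "$dev_name"
    }
    make_filesystem xfs /dev/nvme0n1p1
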
00:09:10.970 00:28:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:10.970 00:28:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.497 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.498 00:09:13.498 real 0m3.134s 00:09:13.498 user 0m0.025s 00:09:13.498 sys 0m0.057s 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 ************************************ 00:09:13.498 END TEST filesystem_xfs 00:09:13.498 ************************************ 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.498 00:28:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 67283 ']' 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.498 killing process with pid 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67283' 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 67283 00:09:13.498 00:28:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 67283 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:16.026 00:09:16.026 real 0m11.871s 00:09:16.026 user 0m43.510s 00:09:16.026 sys 0m1.738s 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.026 ************************************ 00:09:16.026 END TEST nvmf_filesystem_no_in_capsule 00:09:16.026 ************************************ 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
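
Teardown mirrors setup in reverse: drop the test partition, flush, disconnect the initiator, delete the subsystem over RPC, then kill the target and wait for it to exit. Condensed from the trace above, with the pid from this run:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # hold the device lock while editing the partition table
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 67283 && wait 67283   # killprocess: SIGTERM the target, reap it
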
00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.026 ************************************ 00:09:16.026 START TEST nvmf_filesystem_in_capsule 00:09:16.026 ************************************ 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=67624 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 67624 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 67624 ']' 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.026 00:28:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.337 [2024-07-12 00:28:20.960219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:16.337 [2024-07-12 00:28:20.960425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.337 [2024-07-12 00:28:21.128848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.594 [2024-07-12 00:28:21.376333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.594 [2024-07-12 00:28:21.376427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.594 [2024-07-12 00:28:21.376445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.594 [2024-07-12 00:28:21.376459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
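
This second pass, nvmf_filesystem_in_capsule, reruns the identical filesystem matrix with in-capsule data enabled. Roughly speaking, in NVMe/TCP a command capsule may carry up to in_capsule_data_size bytes of write data inline, so small writes avoid a separate data transfer; the only configuration difference between the two passes is the transport parameter, visible just below:

    # first pass (above): in-capsule data disabled
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # second pass: allow up to 4096 bytes of data inside the command capsule
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
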
00:09:16.594 [2024-07-12 00:28:21.376470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.594 [2024-07-12 00:28:21.376736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.594 [2024-07-12 00:28:21.376874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.594 [2024-07-12 00:28:21.377345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.594 [2024-07-12 00:28:21.377360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.160 [2024-07-12 00:28:21.978608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.160 00:28:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.160 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:17.160 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.160 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.730 Malloc1 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.730 00:28:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.730 [2024-07-12 00:28:22.567936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:17.730 { 00:09:17.730 "aliases": [ 00:09:17.730 "da5fd743-f1f1-4004-a5e0-0a11892ea52c" 00:09:17.730 ], 00:09:17.730 "assigned_rate_limits": { 00:09:17.730 "r_mbytes_per_sec": 0, 00:09:17.730 "rw_ios_per_sec": 0, 00:09:17.730 "rw_mbytes_per_sec": 0, 00:09:17.730 "w_mbytes_per_sec": 0 00:09:17.730 }, 00:09:17.730 "block_size": 512, 00:09:17.730 "claim_type": "exclusive_write", 00:09:17.730 "claimed": true, 00:09:17.730 "driver_specific": {}, 00:09:17.730 "memory_domains": [ 00:09:17.730 { 00:09:17.730 "dma_device_id": "system", 00:09:17.730 "dma_device_type": 1 00:09:17.730 }, 00:09:17.730 { 00:09:17.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.730 "dma_device_type": 2 00:09:17.730 } 00:09:17.730 ], 00:09:17.730 "name": "Malloc1", 00:09:17.730 "num_blocks": 1048576, 00:09:17.730 "product_name": "Malloc disk", 00:09:17.730 "supported_io_types": { 00:09:17.730 "abort": true, 00:09:17.730 "compare": false, 00:09:17.730 "compare_and_write": false, 00:09:17.730 "copy": true, 00:09:17.730 "flush": true, 00:09:17.730 "get_zone_info": false, 00:09:17.730 "nvme_admin": false, 00:09:17.730 "nvme_io": false, 00:09:17.730 "nvme_io_md": false, 00:09:17.730 "nvme_iov_md": false, 00:09:17.730 "read": true, 00:09:17.730 "reset": true, 00:09:17.730 "seek_data": false, 00:09:17.730 "seek_hole": false, 00:09:17.730 "unmap": true, 
00:09:17.730 "write": true, 00:09:17.730 "write_zeroes": true, 00:09:17.730 "zcopy": true, 00:09:17.730 "zone_append": false, 00:09:17.730 "zone_management": false 00:09:17.730 }, 00:09:17.730 "uuid": "da5fd743-f1f1-4004-a5e0-0a11892ea52c", 00:09:17.730 "zoned": false 00:09:17.730 } 00:09:17.730 ]' 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:17.730 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.988 00:28:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:20.514 00:28:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:20.514 00:28:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:20.514 00:28:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.450 ************************************ 00:09:21.450 START TEST filesystem_in_capsule_ext4 00:09:21.450 ************************************ 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:21.450 00:28:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:21.450 mke2fs 1.46.5 (30-Dec-2021) 00:09:21.450 Discarding device blocks: 0/522240 done 00:09:21.450 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:21.450 Filesystem UUID: dc246c75-b0e5-4fee-9691-224af2bc233c 00:09:21.450 Superblock backups stored on blocks: 00:09:21.450 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:21.450 00:09:21.450 Allocating group tables: 0/64 done 00:09:21.450 Writing inode tables: 0/64 done 00:09:21.450 Creating journal (8192 blocks): done 00:09:21.450 Writing superblocks and filesystem accounting information: 0/64 done 00:09:21.450 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:21.450 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 67624 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.710 00:09:21.710 real 0m0.368s 00:09:21.710 user 0m0.024s 00:09:21.710 sys 0m0.051s 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:21.710 ************************************ 00:09:21.710 END TEST filesystem_in_capsule_ext4 00:09:21.710 ************************************ 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:21.710 00:28:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.710 ************************************ 00:09:21.710 START TEST filesystem_in_capsule_btrfs 00:09:21.710 ************************************ 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:21.710 btrfs-progs v6.6.2 00:09:21.710 See https://btrfs.readthedocs.io for more information. 00:09:21.710 00:09:21.710 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:21.710 NOTE: several default settings have changed in version 5.15, please make sure 00:09:21.710 this does not affect your deployments: 00:09:21.710 - DUP for metadata (-m dup) 00:09:21.710 - enabled no-holes (-O no-holes) 00:09:21.710 - enabled free-space-tree (-R free-space-tree) 00:09:21.710 00:09:21.710 Label: (null) 00:09:21.710 UUID: 62991b33-92f9-4265-8a2a-9ca1ae8585d9 00:09:21.710 Node size: 16384 00:09:21.710 Sector size: 4096 00:09:21.710 Filesystem size: 510.00MiB 00:09:21.710 Block group profiles: 00:09:21.710 Data: single 8.00MiB 00:09:21.710 Metadata: DUP 32.00MiB 00:09:21.710 System: DUP 8.00MiB 00:09:21.710 SSD detected: yes 00:09:21.710 Zoned device: no 00:09:21.710 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:21.710 Runtime features: free-space-tree 00:09:21.710 Checksum: crc32c 00:09:21.710 Number of devices: 1 00:09:21.710 Devices: 00:09:21.710 ID SIZE PATH 00:09:21.710 1 510.00MiB /dev/nvme0n1p1 00:09:21.710 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:21.710 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 67624 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.968 00:09:21.968 real 0m0.238s 00:09:21.968 user 0m0.021s 00:09:21.968 sys 0m0.064s 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:21.968 ************************************ 00:09:21.968 END TEST filesystem_in_capsule_btrfs 00:09:21.968 ************************************ 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.968 ************************************ 00:09:21.968 START TEST filesystem_in_capsule_xfs 00:09:21.968 ************************************ 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:21.968 00:28:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:21.968 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:21.968 = sectsz=512 attr=2, projid32bit=1 00:09:21.968 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:21.968 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:21.968 data = bsize=4096 blocks=130560, imaxpct=25 00:09:21.968 = sunit=0 swidth=0 blks 00:09:21.968 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:21.968 log =internal log bsize=4096 blocks=16384, version=2 00:09:21.968 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:21.968 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:22.898 Discarding blocks...Done. 
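Each filesystem_in_capsule_* subtest (ext4 and btrfs above, xfs completing below) runs the same cycle against the exported Malloc1 namespace: connect over TCP, partition, make the filesystem, then a mount/write/sync/unmount smoke test while checking that the target stayed alive. A condensed sketch pieced together from the traced commands, not the verbatim target/filesystem.sh:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkfs.xfs -f /dev/nvme0n1p1        # ext4 takes -F; btrfs and xfs take -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa             # prove the filesystem is writable
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                # target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible to the host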
00:09:22.898 00:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:22.898 00:28:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.892 00:09:24.892 real 0m2.635s 00:09:24.892 user 0m0.021s 00:09:24.892 sys 0m0.050s 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:24.892 ************************************ 00:09:24.892 END TEST filesystem_in_capsule_xfs 00:09:24.892 ************************************ 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.892 00:28:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 67624 ']' 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67624' 00:09:24.892 killing process with pid 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 67624 00:09:24.892 00:28:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 67624 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:27.424 00:09:27.424 real 0m11.235s 00:09:27.424 user 0m40.959s 00:09:27.424 sys 0m1.697s 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.424 ************************************ 00:09:27.424 END TEST nvmf_filesystem_in_capsule 00:09:27.424 ************************************ 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.424 rmmod nvme_tcp 00:09:27.424 rmmod nvme_fabrics 00:09:27.424 rmmod nvme_keyring 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.424 00:09:27.424 real 0m23.929s 00:09:27.424 user 1m24.692s 00:09:27.424 sys 0m3.848s 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.424 ************************************ 00:09:27.424 END TEST nvmf_filesystem 00:09:27.424 00:28:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.424 ************************************ 00:09:27.424 00:28:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.424 00:28:32 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:27.424 00:28:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.425 00:28:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.425 00:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.425 ************************************ 00:09:27.425 START TEST nvmf_target_discovery 00:09:27.425 ************************************ 00:09:27.425 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:27.684 * Looking for test storage... 
00:09:27.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.684 00:28:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.685 Cannot find device "nvmf_tgt_br" 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.685 Cannot find device "nvmf_tgt_br2" 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.685 Cannot find device "nvmf_tgt_br" 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.685 Cannot find device "nvmf_tgt_br2" 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.685 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.944 00:28:32 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.944 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:27.944 00:09:27.945 --- 10.0.0.2 ping statistics --- 00:09:27.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.945 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:27.945 00:09:27.945 --- 10.0.0.3 ping statistics --- 00:09:27.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.945 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:27.945 00:09:27.945 --- 10.0.0.1 ping statistics --- 00:09:27.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.945 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=68122 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 68122 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 68122 ']' 00:09:27.945 00:28:32 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.945 00:28:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:28.203 [2024-07-12 00:28:32.898963] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:28.203 [2024-07-12 00:28:32.899181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.203 [2024-07-12 00:28:33.082924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.770 [2024-07-12 00:28:33.402214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.770 [2024-07-12 00:28:33.402320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.770 [2024-07-12 00:28:33.402338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.770 [2024-07-12 00:28:33.402357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.770 [2024-07-12 00:28:33.402369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
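The "Cannot find device" and "Cannot open network namespace" lines earlier in this test are expected: nvmf_veth_init tears down any leftover topology before rebuilding it, so the delete commands fail on a clean host. The topology it then builds, excerpted from the trace (the second target interface and the iptables rules are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.2    # host-side initiator can reach the namespaced target

The three ping blocks in the log (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) verify both directions across the bridge before the discovery target is started.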
00:09:28.770 [2024-07-12 00:28:33.402620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.770 [2024-07-12 00:28:33.402768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.770 [2024-07-12 00:28:33.403430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.770 [2024-07-12 00:28:33.403450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 [2024-07-12 00:28:33.873317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 Null1 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.029 [2024-07-12 00:28:33.942100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 Null2 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.029 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.288 Null3 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:29.288 00:28:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 Null4 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.289 
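The setup traced above reduces to a handful of RPC calls. Below is a condensed, standalone sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py; the $rpc variable is illustrative, while the calls themselves mirror the rpc_cmd invocations in the trace:

    #!/usr/bin/env bash
    # Sketch: rebuild the four-subsystem discovery target by hand.
    # Assumes an nvmf_tgt is already serving RPCs on /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512       # same size/block args as the test
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

This is what produces the six discovery-log records reported next: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral.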
00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 4420 00:09:29.289 00:09:29.289 Discovery Log Number of Records 6, Generation counter 6 00:09:29.289 =====Discovery Log Entry 0====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: current discovery subsystem 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4420 00:09:29.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: explicit discovery connections, duplicate discovery information 00:09:29.289 sectype: none 00:09:29.289 =====Discovery Log Entry 1====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: nvme subsystem 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4420 00:09:29.289 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: none 00:09:29.289 sectype: none 00:09:29.289 =====Discovery Log Entry 2====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: nvme subsystem 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4420 00:09:29.289 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: none 00:09:29.289 sectype: none 00:09:29.289 =====Discovery Log Entry 3====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: nvme subsystem 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4420 00:09:29.289 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: none 00:09:29.289 sectype: none 00:09:29.289 =====Discovery Log Entry 4====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: nvme subsystem 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4420 00:09:29.289 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: none 00:09:29.289 sectype: none 00:09:29.289 =====Discovery Log Entry 5====== 00:09:29.289 trtype: tcp 00:09:29.289 adrfam: ipv4 00:09:29.289 subtype: discovery subsystem referral 00:09:29.289 treq: not required 00:09:29.289 portid: 0 00:09:29.289 trsvcid: 4430 00:09:29.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:29.289 traddr: 10.0.0.2 00:09:29.289 eflags: none 00:09:29.289 sectype: none 00:09:29.289 Perform nvmf subsystem discovery via RPC 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.289 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.289 [ 00:09:29.289 { 00:09:29.289 "allow_any_host": true, 00:09:29.289 "hosts": [], 00:09:29.289 "listen_addresses": [ 00:09:29.289 { 00:09:29.289 "adrfam": "IPv4", 00:09:29.289 "traddr": "10.0.0.2", 00:09:29.289 "trsvcid": "4420", 00:09:29.289 "trtype": "TCP" 00:09:29.289 } 00:09:29.289 ], 00:09:29.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:29.289 "subtype": "Discovery" 00:09:29.289 }, 00:09:29.289 { 00:09:29.289 "allow_any_host": true, 00:09:29.289 "hosts": [], 00:09:29.289 "listen_addresses": [ 00:09:29.289 { 
00:09:29.289 "adrfam": "IPv4", 00:09:29.289 "traddr": "10.0.0.2", 00:09:29.289 "trsvcid": "4420", 00:09:29.289 "trtype": "TCP" 00:09:29.289 } 00:09:29.289 ], 00:09:29.289 "max_cntlid": 65519, 00:09:29.289 "max_namespaces": 32, 00:09:29.289 "min_cntlid": 1, 00:09:29.289 "model_number": "SPDK bdev Controller", 00:09:29.289 "namespaces": [ 00:09:29.289 { 00:09:29.289 "bdev_name": "Null1", 00:09:29.289 "name": "Null1", 00:09:29.289 "nguid": "9761D17BD54C49988DB3A97365048405", 00:09:29.289 "nsid": 1, 00:09:29.289 "uuid": "9761d17b-d54c-4998-8db3-a97365048405" 00:09:29.289 } 00:09:29.289 ], 00:09:29.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.289 "serial_number": "SPDK00000000000001", 00:09:29.289 "subtype": "NVMe" 00:09:29.289 }, 00:09:29.289 { 00:09:29.289 "allow_any_host": true, 00:09:29.289 "hosts": [], 00:09:29.289 "listen_addresses": [ 00:09:29.289 { 00:09:29.289 "adrfam": "IPv4", 00:09:29.289 "traddr": "10.0.0.2", 00:09:29.289 "trsvcid": "4420", 00:09:29.289 "trtype": "TCP" 00:09:29.289 } 00:09:29.289 ], 00:09:29.289 "max_cntlid": 65519, 00:09:29.289 "max_namespaces": 32, 00:09:29.289 "min_cntlid": 1, 00:09:29.289 "model_number": "SPDK bdev Controller", 00:09:29.289 "namespaces": [ 00:09:29.289 { 00:09:29.289 "bdev_name": "Null2", 00:09:29.289 "name": "Null2", 00:09:29.289 "nguid": "B74CEAF291A0403796D0721771E72CD0", 00:09:29.289 "nsid": 1, 00:09:29.289 "uuid": "b74ceaf2-91a0-4037-96d0-721771e72cd0" 00:09:29.289 } 00:09:29.289 ], 00:09:29.289 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:29.289 "serial_number": "SPDK00000000000002", 00:09:29.289 "subtype": "NVMe" 00:09:29.289 }, 00:09:29.289 { 00:09:29.289 "allow_any_host": true, 00:09:29.289 "hosts": [], 00:09:29.289 "listen_addresses": [ 00:09:29.290 { 00:09:29.290 "adrfam": "IPv4", 00:09:29.290 "traddr": "10.0.0.2", 00:09:29.290 "trsvcid": "4420", 00:09:29.290 "trtype": "TCP" 00:09:29.290 } 00:09:29.290 ], 00:09:29.290 "max_cntlid": 65519, 00:09:29.290 "max_namespaces": 32, 00:09:29.290 "min_cntlid": 1, 00:09:29.290 "model_number": "SPDK bdev Controller", 00:09:29.290 "namespaces": [ 00:09:29.290 { 00:09:29.290 "bdev_name": "Null3", 00:09:29.290 "name": "Null3", 00:09:29.290 "nguid": "78BAAEEE104B4CEDAF92217DAB4789AE", 00:09:29.290 "nsid": 1, 00:09:29.290 "uuid": "78baaeee-104b-4ced-af92-217dab4789ae" 00:09:29.290 } 00:09:29.290 ], 00:09:29.290 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:29.290 "serial_number": "SPDK00000000000003", 00:09:29.290 "subtype": "NVMe" 00:09:29.290 }, 00:09:29.290 { 00:09:29.290 "allow_any_host": true, 00:09:29.290 "hosts": [], 00:09:29.290 "listen_addresses": [ 00:09:29.290 { 00:09:29.290 "adrfam": "IPv4", 00:09:29.290 "traddr": "10.0.0.2", 00:09:29.290 "trsvcid": "4420", 00:09:29.290 "trtype": "TCP" 00:09:29.290 } 00:09:29.290 ], 00:09:29.290 "max_cntlid": 65519, 00:09:29.290 "max_namespaces": 32, 00:09:29.290 "min_cntlid": 1, 00:09:29.290 "model_number": "SPDK bdev Controller", 00:09:29.290 "namespaces": [ 00:09:29.290 { 00:09:29.290 "bdev_name": "Null4", 00:09:29.290 "name": "Null4", 00:09:29.290 "nguid": "52A8C2485EB9471DAE849782161EC5C5", 00:09:29.290 "nsid": 1, 00:09:29.290 "uuid": "52a8c248-5eb9-471d-ae84-9782161ec5c5" 00:09:29.290 } 00:09:29.290 ], 00:09:29.290 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:29.290 "serial_number": "SPDK00000000000004", 00:09:29.290 "subtype": "NVMe" 00:09:29.290 } 00:09:29.290 ] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.290 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.548 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:29.549 rmmod nvme_tcp 00:09:29.549 rmmod nvme_fabrics 00:09:29.549 rmmod nvme_keyring 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 68122 ']' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 68122 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 68122 ']' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 68122 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68122 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:29.549 
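The teardown that follows is the exact mirror of that setup, ending with the bdev_get_bdevs emptiness check whose jq filter appears in the trace. A sketch under the same rpc.py assumption:

    # Sketch: teardown mirror of the setup above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    remaining=$($rpc bdev_get_bdevs | jq -r '.[].name')
    [ -z "$remaining" ] && echo "no bdevs left"    # the test fails the run otherwise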
00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:29.549 killing process with pid 68122 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68122' 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 68122 00:09:29.549 00:28:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 68122 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:30.920 00:09:30.920 real 0m3.495s 00:09:30.920 user 0m8.629s 00:09:30.920 sys 0m0.780s 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.920 ************************************ 00:09:30.920 END TEST nvmf_target_discovery 00:09:30.920 00:28:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:30.920 ************************************ 00:09:30.920 00:28:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.921 00:28:35 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:30.921 00:28:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.921 00:28:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.921 00:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.921 ************************************ 00:09:30.921 START TEST nvmf_referrals 00:09:30.921 ************************************ 00:09:30.921 00:28:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:31.179 * Looking for test storage... 
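referrals.sh, starting here, registers discovery referrals at 127.0.0.2/3/4 on port 4430 and repeatedly compares two views of them: the target's RPC state and the host's discovery log. The RPC side of its get_referral_ips helper reduces to the jq filter seen throughout the trace below; a sketch, with the rpc.py path assumed as before:

    # Sketch: list referral target addresses via RPC, sorted for comparison.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs echo
    # expected after the three adds below: 127.0.0.2 127.0.0.3 127.0.0.4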
00:09:31.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:31.179 Cannot find device "nvmf_tgt_br" 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:09:31.179 00:28:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.179 Cannot find device "nvmf_tgt_br2" 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:31.179 Cannot find device "nvmf_tgt_br" 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:31.179 Cannot find device "nvmf_tgt_br2" 
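The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init begins by tearing down any leftover fixture, and on a clean host those devices do not exist yet. A condensed sketch of that best-effort cleanup (the error suppression is added here for standalone use; the traced common.sh simply lets these commands fail):

    # Sketch: best-effort cleanup before rebuilding the veth fixture.
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster 2>/dev/null || true
        ip link set "$br" down     2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true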
00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.179 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.438 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:31.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:31.438 00:09:31.439 --- 10.0.0.2 ping statistics --- 00:09:31.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.439 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:31.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:31.439 00:09:31.439 --- 10.0.0.3 ping statistics --- 00:09:31.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.439 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:31.439 00:09:31.439 --- 10.0.0.1 ping statistics --- 00:09:31.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.439 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
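After cleanup, nvmf_veth_init builds the topology that the three successful pings validate: the initiator stays in the root namespace on 10.0.0.1, the target interfaces move into nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, and a bridge ties the peer ends together. Condensed from the commands in the trace, with the link-up steps and iptables rules folded into a closing comment:

    # Sketch: the veth/namespace topology nvmf_veth_init just built.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # ...then bring every link (and lo in the namespace) up, and allow
    # tcp/4420 plus bridge-internal forwarding via the iptables rules above.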
00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=68370 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 68370 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 68370 ']' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.439 00:28:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.697 [2024-07-12 00:28:36.478113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:31.697 [2024-07-12 00:28:36.478294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.955 [2024-07-12 00:28:36.662035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.213 [2024-07-12 00:28:36.955200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.213 [2024-07-12 00:28:36.955306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.213 [2024-07-12 00:28:36.955324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.213 [2024-07-12 00:28:36.955339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.213 [2024-07-12 00:28:36.955352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
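The target is then launched inside the namespace, and waitforlisten blocks until the RPC socket answers. A minimal stand-in for that launch-and-wait; the polling loop and its timeout are illustrative, not the helper's actual logic:

    # Sketch: start nvmf_tgt in the namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                 # ~10 s, illustrative
        [ -S /var/tmp/spdk.sock ] && break    # unix sockets are not netns-scoped
        sleep 0.1
    done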
00:09:32.213 [2024-07-12 00:28:36.955511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.213 [2024-07-12 00:28:36.955780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.213 [2024-07-12 00:28:36.956507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.213 [2024-07-12 00:28:36.956518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 [2024-07-12 00:28:37.450451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 [2024-07-12 00:28:37.489469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:32.778 00:28:37 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.778 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.036 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.037 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.295 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.295 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.295 00:28:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
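The host-side half of these checks is get_discovery_entries, which filters nvme discover JSON output by subtype. A sketch of the two subnqn assertions made at referrals.sh@67/@68 above, reusing the host NQN and ID generated earlier in this run:

    # Sketch: verify referral entries from the host side, as referrals.sh does.
    discover() {
        nvme discover \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
            --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    discover | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # expected: nqn.2016-06.io.spdk:cnode1
    discover | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
    # expected: nqn.2014-08.org.nvmexpress.discovery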
00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.295 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:33.554 00:28:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.554 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.813 rmmod nvme_tcp 00:09:33.813 rmmod nvme_fabrics 00:09:33.813 rmmod nvme_keyring 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 68370 ']' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 68370 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 68370 ']' 00:09:33.813 00:28:38 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 68370 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68370 00:09:33.813 killing process with pid 68370 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68370' 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 68370 00:09:33.813 00:28:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 68370 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:35.191 00:09:35.191 real 0m4.126s 00:09:35.191 user 0m12.004s 00:09:35.191 sys 0m1.002s 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.191 00:28:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.191 ************************************ 00:09:35.191 END TEST nvmf_referrals 00:09:35.191 ************************************ 00:09:35.191 00:28:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.191 00:28:40 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:35.191 00:28:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.191 00:28:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.191 00:28:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.191 ************************************ 00:09:35.191 START TEST nvmf_connect_disconnect 00:09:35.191 ************************************ 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:35.191 * Looking for test storage... 
00:09:35.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.191 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.451 00:28:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.451 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:35.452 Cannot find device "nvmf_tgt_br" 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.452 Cannot find device "nvmf_tgt_br2" 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:35.452 Cannot find device "nvmf_tgt_br" 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:35.452 Cannot find device 
"nvmf_tgt_br2" 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:35.452 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:35.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:09:35.711 00:09:35.711 --- 10.0.0.2 ping statistics --- 00:09:35.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.711 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:35.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:35.711 00:09:35.711 --- 10.0.0.3 ping statistics --- 00:09:35.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.711 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:35.711 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:35.712 00:09:35.712 --- 10.0.0.1 ping statistics --- 00:09:35.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.712 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=68682 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 68682 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 68682 ']' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:35.712 00:28:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.712 [2024-07-12 00:28:40.632346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:35.712 [2024-07-12 00:28:40.632512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.970 [2024-07-12 00:28:40.807147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.229 [2024-07-12 00:28:41.139697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.229 [2024-07-12 00:28:41.139773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.229 [2024-07-12 00:28:41.139794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.229 [2024-07-12 00:28:41.139808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.229 [2024-07-12 00:28:41.139820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
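The entries above show nvmfappstart in action: the target is launched inside the network namespace with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and waitforlisten (rpc_addr=/var/tmp/spdk.sock, max_retries=100) blocks until pid 68682 answers RPCs. A minimal sketch of that start-and-wait pattern, not the test's exact helper; the polling loop body here is an assumption, the real waitforlisten lives in common/autotest_common.sh:

  # Launch the target inside the namespace, then poll the RPC socket until it answers.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do                 # max_retries=100, as logged above
      kill -0 "$nvmfpid" 2>/dev/null || exit 1    # give up if the target process died
      # Any cheap RPC works as a liveness probe; rpc_get_methods is always available.
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
          break                                   # target is up and listening on the socket
      fi
      sleep 0.5
  done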
00:09:36.229 [2024-07-12 00:28:41.140006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.229 [2024-07-12 00:28:41.140283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.229 [2024-07-12 00:28:41.140888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.229 [2024-07-12 00:28:41.140925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:36.796 [2024-07-12 00:28:41.656970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.796 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.055 [2024-07-12 00:28:41.782652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:37.055 00:28:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:39.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.504 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:56.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.919 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:22.919 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:22.919 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.919 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.177 rmmod nvme_tcp 00:13:23.177 rmmod nvme_fabrics 00:13:23.177 rmmod nvme_keyring 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 68682 ']' 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 68682 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 68682 ']' 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 68682 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68682 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.177 killing process with pid 68682 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68682' 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 68682 00:13:23.177 00:32:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 68682 00:13:24.554 00:32:29 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:24.554 00:13:24.554 real 3m49.350s 00:13:24.554 user 14m47.091s 00:13:24.554 sys 0m25.030s 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.554 00:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.554 ************************************ 00:13:24.554 END TEST nvmf_connect_disconnect 00:13:24.554 ************************************ 00:13:24.554 00:32:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.554 00:32:29 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:24.554 00:32:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.554 00:32:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.554 00:32:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.554 ************************************ 00:13:24.554 START TEST nvmf_multitarget 00:13:24.554 ************************************ 00:13:24.554 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:24.812 * Looking for test storage... 
00:13:24.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.812 00:32:29 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:24.812 Cannot find device "nvmf_tgt_br" 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.812 Cannot find device "nvmf_tgt_br2" 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:24.812 Cannot find device "nvmf_tgt_br" 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:24.812 Cannot find device "nvmf_tgt_br2" 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:13:24.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:24.812 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:25.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:13:25.070 00:13:25.070 --- 10.0.0.2 ping statistics --- 00:13:25.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.070 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:25.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:25.070 00:13:25.070 --- 10.0.0.3 ping statistics --- 00:13:25.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.070 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:25.070 00:13:25.070 --- 10.0.0.1 ping statistics --- 00:13:25.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.070 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=72474 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 72474 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 72474 ']' 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
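The three pings above close out nvmf_veth_init: the host-side 10.0.0.1 reaches both 10.0.0.2 and 10.0.0.3 inside the namespace, and the namespace can reach the host back. A condensed sketch of the topology it builds, with every command taken from the log (the pre-cleanup steps and the second target interface nvmf_tgt_if2 are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the host ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target sanity check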
00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.070 00:32:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:25.327 [2024-07-12 00:32:30.066132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:25.327 [2024-07-12 00:32:30.066314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.327 [2024-07-12 00:32:30.244608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.892 [2024-07-12 00:32:30.522869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.892 [2024-07-12 00:32:30.522955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.892 [2024-07-12 00:32:30.522974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.892 [2024-07-12 00:32:30.522989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.892 [2024-07-12 00:32:30.523002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.892 [2024-07-12 00:32:30.523207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.892 [2024-07-12 00:32:30.523327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.892 [2024-07-12 00:32:30.523876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.892 [2024-07-12 00:32:30.523887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:26.149 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:26.405 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:26.405 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:26.405 "nvmf_tgt_1" 00:13:26.405 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:26.662 "nvmf_tgt_2" 00:13:26.662 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:26.662 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:26.662 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:26.662 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:26.919 true 00:13:26.919 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:26.919 true 00:13:26.919 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:26.919 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:27.176 00:32:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:27.176 rmmod nvme_tcp 00:13:27.176 rmmod nvme_fabrics 00:13:27.176 rmmod nvme_keyring 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 72474 ']' 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 72474 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 72474 ']' 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 72474 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72474 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.176 killing process with pid 72474 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72474' 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 72474 00:13:27.176 00:32:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 72474 00:13:28.551 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.551 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:28.552 00:13:28.552 real 0m3.882s 00:13:28.552 user 0m11.109s 00:13:28.552 sys 0m0.872s 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.552 00:32:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:28.552 ************************************ 00:13:28.552 END TEST nvmf_multitarget 00:13:28.552 ************************************ 00:13:28.552 00:32:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.552 00:32:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:28.552 00:32:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.552 00:32:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.552 00:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.552 ************************************ 00:13:28.552 START TEST nvmf_rpc 00:13:28.552 ************************************ 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:28.552 * Looking for test storage... 
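
Each suite in this log is driven by the run_test helper from common/autotest_common.sh, which prints the START/END TEST banners seen here, times the script, and propagates its exit status. A hedged sketch of its shape (the real helper also manages xtrace state and records per-test timing data):

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      "$@" || return 1          # run the suite, keep its exit status
      echo "************ END TEST $name ************"
  }

  run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp
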
00:13:28.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.552 00:32:33 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.553 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:28.810 Cannot find device "nvmf_tgt_br" 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.810 Cannot find device "nvmf_tgt_br2" 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:28.810 Cannot find device "nvmf_tgt_br" 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:28.810 Cannot find device "nvmf_tgt_br2" 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.810 00:32:33 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:28.811 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:29.069 00:13:29.069 --- 10.0.0.2 ping statistics --- 00:13:29.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.069 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:13:29.069 00:13:29.069 --- 10.0.0.3 ping statistics --- 00:13:29.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.069 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:29.069 00:13:29.069 --- 10.0.0.1 ping statistics --- 00:13:29.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.069 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=72710 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 72710 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 72710 ']' 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.069 00:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.069 [2024-07-12 00:32:33.963490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:29.069 [2024-07-12 00:32:33.963654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.327 [2024-07-12 00:32:34.137360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.584 [2024-07-12 00:32:34.431867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.584 [2024-07-12 00:32:34.431927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
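
At this point nvmf_veth_init (traced a few lines up) has finished stitching the test topology together: the initiator half of each veth pair stays in the root namespace, the target half is moved into nvmf_tgt_ns_spdk, and a bridge joins the peer ends so the two namespaces can reach each other over 10.0.0.0/24. Condensed from the commands in the trace, the core of that setup is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # root namespace -> target namespace, as verified above

The second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern. nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its TCP listener on 10.0.0.2:4420 is reachable only through this plumbing.
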
00:13:29.584 [2024-07-12 00:32:34.431954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.584 [2024-07-12 00:32:34.431970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.584 [2024-07-12 00:32:34.431983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.584 [2024-07-12 00:32:34.432479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.584 [2024-07-12 00:32:34.432581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.584 [2024-07-12 00:32:34.432723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.584 [2024-07-12 00:32:34.433152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.149 00:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:30.149 "poll_groups": [ 00:13:30.149 { 00:13:30.149 "admin_qpairs": 0, 00:13:30.149 "completed_nvme_io": 0, 00:13:30.149 "current_admin_qpairs": 0, 00:13:30.149 "current_io_qpairs": 0, 00:13:30.149 "io_qpairs": 0, 00:13:30.149 "name": "nvmf_tgt_poll_group_000", 00:13:30.149 "pending_bdev_io": 0, 00:13:30.149 "transports": [] 00:13:30.149 }, 00:13:30.149 { 00:13:30.149 "admin_qpairs": 0, 00:13:30.149 "completed_nvme_io": 0, 00:13:30.149 "current_admin_qpairs": 0, 00:13:30.149 "current_io_qpairs": 0, 00:13:30.149 "io_qpairs": 0, 00:13:30.149 "name": "nvmf_tgt_poll_group_001", 00:13:30.149 "pending_bdev_io": 0, 00:13:30.149 "transports": [] 00:13:30.149 }, 00:13:30.149 { 00:13:30.149 "admin_qpairs": 0, 00:13:30.149 "completed_nvme_io": 0, 00:13:30.149 "current_admin_qpairs": 0, 00:13:30.149 "current_io_qpairs": 0, 00:13:30.149 "io_qpairs": 0, 00:13:30.149 "name": "nvmf_tgt_poll_group_002", 00:13:30.149 "pending_bdev_io": 0, 00:13:30.149 "transports": [] 00:13:30.149 }, 00:13:30.149 { 00:13:30.149 "admin_qpairs": 0, 00:13:30.149 "completed_nvme_io": 0, 00:13:30.149 "current_admin_qpairs": 0, 00:13:30.149 "current_io_qpairs": 0, 00:13:30.149 "io_qpairs": 0, 00:13:30.149 "name": "nvmf_tgt_poll_group_003", 00:13:30.149 "pending_bdev_io": 0, 00:13:30.149 "transports": [] 00:13:30.149 } 00:13:30.149 ], 00:13:30.149 "tick_rate": 2200000000 00:13:30.149 }' 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:30.149 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.408 [2024-07-12 00:32:35.132765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:30.408 "poll_groups": [ 00:13:30.408 { 00:13:30.408 "admin_qpairs": 0, 00:13:30.408 "completed_nvme_io": 0, 00:13:30.408 "current_admin_qpairs": 0, 00:13:30.408 "current_io_qpairs": 0, 00:13:30.408 "io_qpairs": 0, 00:13:30.408 "name": "nvmf_tgt_poll_group_000", 00:13:30.408 "pending_bdev_io": 0, 00:13:30.408 "transports": [ 00:13:30.408 { 00:13:30.408 "trtype": "TCP" 00:13:30.408 } 00:13:30.408 ] 00:13:30.408 }, 00:13:30.408 { 00:13:30.408 "admin_qpairs": 0, 00:13:30.408 "completed_nvme_io": 0, 00:13:30.408 "current_admin_qpairs": 0, 00:13:30.408 "current_io_qpairs": 0, 00:13:30.408 "io_qpairs": 0, 00:13:30.408 "name": "nvmf_tgt_poll_group_001", 00:13:30.408 "pending_bdev_io": 0, 00:13:30.408 "transports": [ 00:13:30.408 { 00:13:30.408 "trtype": "TCP" 00:13:30.408 } 00:13:30.408 ] 00:13:30.408 }, 00:13:30.408 { 00:13:30.408 "admin_qpairs": 0, 00:13:30.408 "completed_nvme_io": 0, 00:13:30.408 "current_admin_qpairs": 0, 00:13:30.408 "current_io_qpairs": 0, 00:13:30.408 "io_qpairs": 0, 00:13:30.408 "name": "nvmf_tgt_poll_group_002", 00:13:30.408 "pending_bdev_io": 0, 00:13:30.408 "transports": [ 00:13:30.408 { 00:13:30.408 "trtype": "TCP" 00:13:30.408 } 00:13:30.408 ] 00:13:30.408 }, 00:13:30.408 { 00:13:30.408 "admin_qpairs": 0, 00:13:30.408 "completed_nvme_io": 0, 00:13:30.408 "current_admin_qpairs": 0, 00:13:30.408 "current_io_qpairs": 0, 00:13:30.408 "io_qpairs": 0, 00:13:30.408 "name": "nvmf_tgt_poll_group_003", 00:13:30.408 "pending_bdev_io": 0, 00:13:30.408 "transports": [ 00:13:30.408 { 00:13:30.408 "trtype": "TCP" 00:13:30.408 } 00:13:30.408 ] 00:13:30.408 } 00:13:30.408 ], 00:13:30.408 "tick_rate": 2200000000 00:13:30.408 }' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
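
The stats checks here use two small helpers from rpc.sh: jcount runs a jq filter and counts the lines it emits, while jsum totals the emitted values with awk; both are applied to the JSON captured in $stats. A minimal stdin-based sketch (the real helpers read the captured variable):

  jcount() { jq "$1" | wc -l; }
  jsum() { jq "$1" | awk '{s+=$1} END {print s}'; }

  stats=$(rpc_cmd nvmf_get_stats)
  echo "$stats" | jcount '.poll_groups[].name'       # 4 poll groups, one per core in -m 0xF
  echo "$stats" | jsum '.poll_groups[].io_qpairs'    # 0 on an idle target
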
00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.408 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 Malloc1 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 [2024-07-12 00:32:35.380131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.2 -s 4420 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.2 -s 4420 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.2 -s 4420 00:13:30.666 [2024-07-12 00:32:35.409170] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea' 00:13:30.666 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:30.666 could not add new controller: failed to write to nvme-fabrics device 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:30.666 00:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.196 00:32:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.196 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.197 [2024-07-12 00:32:37.813909] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea' 00:13:33.197 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:33.197 could not add new controller: failed to write to nvme-fabrics device 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:33.197 00:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.098 00:32:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:35.098 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.357 00:32:40 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.357 [2024-07-12 00:32:40.118096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.357 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.616 00:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.616 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:35.616 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.616 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:35.616 00:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.520 [2024-07-12 00:32:42.440453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.520 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:37.779 00:32:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 [2024-07-12 00:32:44.739193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:40.308 00:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:42.263 00:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 [2024-07-12 00:32:47.039591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.263 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.522 00:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.522 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.522 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.522 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.522 00:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.425 
00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.425 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 [2024-07-12 00:32:49.362308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 00:32:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.683 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:44.684 00:32:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 [2024-07-12 00:32:51.769563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 [2024-07-12 00:32:51.817669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 [2024-07-12 00:32:51.865703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
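The connect/disconnect passes above (target/rpc.sh@81-94) rely on the waitforserial and waitforserial_disconnect helpers from common/autotest_common.sh: after nvme connect, the host polls lsblk until a block device carrying the subsystem serial appears, and after nvme disconnect it polls until the serial is gone again. A minimal sketch of the appearance check, matching the 15-retry budget and the lsblk/grep probe seen in the trace (a hypothetical standalone function; the in-tree helper also accepts an expected device count):

waitforserial() {
    local serial=$1 i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2                                               # let the fabric settle, then probe
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == 1 )) && return 0                   # serial visible: connect completed
    done
    return 1                                                  # device never appeared within the budget
}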
00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.216 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 [2024-07-12 00:32:51.917807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
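The loop that starts at target/rpc.sh@99 exercises the subsystem lifecycle RPCs five times without ever connecting a host: create a subsystem, add a TCP listener, attach Malloc1 as a namespace, allow any host, then remove the namespace and delete the subsystem. One iteration, reduced to the RPCs it issues (a sketch; rpc.py here stands for scripts/rpc.py talking to the running nvmf_tgt):

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done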
00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 [2024-07-12 00:32:51.969888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:47.217 "poll_groups": [ 00:13:47.217 { 00:13:47.217 "admin_qpairs": 2, 00:13:47.217 "completed_nvme_io": 115, 00:13:47.217 "current_admin_qpairs": 0, 00:13:47.217 "current_io_qpairs": 0, 00:13:47.217 "io_qpairs": 16, 00:13:47.217 "name": "nvmf_tgt_poll_group_000", 00:13:47.217 "pending_bdev_io": 0, 00:13:47.217 "transports": [ 00:13:47.217 { 00:13:47.217 "trtype": "TCP" 00:13:47.217 } 00:13:47.217 ] 00:13:47.217 }, 00:13:47.217 { 00:13:47.217 "admin_qpairs": 3, 00:13:47.217 "completed_nvme_io": 69, 00:13:47.217 "current_admin_qpairs": 0, 00:13:47.217 "current_io_qpairs": 0, 00:13:47.217 "io_qpairs": 17, 00:13:47.217 "name": "nvmf_tgt_poll_group_001", 00:13:47.217 "pending_bdev_io": 0, 00:13:47.217 "transports": [ 00:13:47.217 { 00:13:47.217 "trtype": "TCP" 00:13:47.217 } 00:13:47.217 ] 00:13:47.217 }, 00:13:47.217 { 00:13:47.217 "admin_qpairs": 1, 00:13:47.217 
"completed_nvme_io": 71, 00:13:47.217 "current_admin_qpairs": 0, 00:13:47.217 "current_io_qpairs": 0, 00:13:47.217 "io_qpairs": 19, 00:13:47.217 "name": "nvmf_tgt_poll_group_002", 00:13:47.217 "pending_bdev_io": 0, 00:13:47.217 "transports": [ 00:13:47.217 { 00:13:47.217 "trtype": "TCP" 00:13:47.217 } 00:13:47.217 ] 00:13:47.217 }, 00:13:47.217 { 00:13:47.217 "admin_qpairs": 1, 00:13:47.217 "completed_nvme_io": 165, 00:13:47.217 "current_admin_qpairs": 0, 00:13:47.217 "current_io_qpairs": 0, 00:13:47.217 "io_qpairs": 18, 00:13:47.217 "name": "nvmf_tgt_poll_group_003", 00:13:47.217 "pending_bdev_io": 0, 00:13:47.217 "transports": [ 00:13:47.217 { 00:13:47.217 "trtype": "TCP" 00:13:47.217 } 00:13:47.217 ] 00:13:47.217 } 00:13:47.217 ], 00:13:47.217 "tick_rate": 2200000000 00:13:47.217 }' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.217 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.476 rmmod nvme_tcp 00:13:47.476 rmmod nvme_fabrics 00:13:47.476 rmmod nvme_keyring 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 72710 ']' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 72710 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 72710 ']' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 72710 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72710 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:47.476 killing process with pid 72710 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72710' 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 72710 00:13:47.476 00:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 72710 00:13:48.851 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.851 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.851 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.851 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.852 00:13:48.852 real 0m20.324s 00:13:48.852 user 1m14.523s 00:13:48.852 sys 0m2.798s 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.852 00:32:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.852 ************************************ 00:13:48.852 END TEST nvmf_rpc 00:13:48.852 ************************************ 00:13:48.852 00:32:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:48.852 00:32:53 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:48.852 00:32:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.852 00:32:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.852 00:32:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.852 ************************************ 00:13:48.852 START TEST nvmf_invalid 00:13:48.852 ************************************ 00:13:48.852 00:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:49.111 * Looking for test storage... 
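Just before the teardown above, the test validated the target's poll-group accounting with nvmf_get_stats and the jsum helper (target/rpc.sh@19-20): jq prints one value per poll group and awk sums them, which is how the trace arrives at 7 admin qpairs (2+3+1+1) and 70 I/O qpairs (16+17+19+18) across the four groups. A sketch of the same aggregation, assuming rpc.py reaches the running target:

jsum() {
    local filter=$1
    rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].io_qpairs'    # prints 70 for the stats dump shown above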
00:13:49.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.111 
00:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.111 00:32:53 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:49.111 Cannot find device "nvmf_tgt_br" 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.111 Cannot find device "nvmf_tgt_br2" 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:49.111 Cannot find device "nvmf_tgt_br" 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:49.111 Cannot find device "nvmf_tgt_br2" 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:49.111 00:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.111 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.111 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:49.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:13:49.370 00:13:49.370 --- 10.0.0.2 ping statistics --- 00:13:49.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.370 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:49.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:49.370 00:13:49.370 --- 10.0.0.3 ping statistics --- 00:13:49.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.370 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:49.370 00:13:49.370 --- 10.0.0.1 ping statistics --- 00:13:49.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.370 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=73237 00:13:49.370 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 73237 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 73237 ']' 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.371 00:32:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:49.630 [2024-07-12 00:32:54.372009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
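nvmf_veth_init above builds the virtual network this NET_TYPE=virt run targets: a dedicated namespace for nvmf_tgt, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1/24 on the initiator interface in the root namespace, 10.0.0.2/24 and 10.0.0.3/24 on the target interfaces inside the namespace, an iptables ACCEPT for TCP port 4420, and three pings to prove reachability before the target starts. Condensed to its core (a sketch; the matching ip link set ... up steps and the second target interface are elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # root namespace -> target, across the bridge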
00:13:49.630 [2024-07-12 00:32:54.372186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.630 [2024-07-12 00:32:54.557494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.197 [2024-07-12 00:32:54.839977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.197 [2024-07-12 00:32:54.840063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.197 [2024-07-12 00:32:54.840080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.197 [2024-07-12 00:32:54.840096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.197 [2024-07-12 00:32:54.840107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.197 [2024-07-12 00:32:54.840313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.197 [2024-07-12 00:32:54.840479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.197 [2024-07-12 00:32:54.841091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.197 [2024-07-12 00:32:54.841109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.454 00:32:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.455 00:32:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:50.455 00:32:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.455 00:32:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.455 00:32:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 00:32:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.713 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:50.713 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26643 00:13:50.970 [2024-07-12 00:32:55.675908] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:50.970 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/12 00:32:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26643 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:50.970 request: 00:13:50.970 { 00:13:50.970 "method": "nvmf_create_subsystem", 00:13:50.970 "params": { 00:13:50.970 "nqn": "nqn.2016-06.io.spdk:cnode26643", 00:13:50.970 "tgt_name": "foobar" 00:13:50.970 } 00:13:50.970 } 00:13:50.970 Got JSON-RPC error response 00:13:50.970 GoRPCClient: error on JSON-RPC call' 00:13:50.970 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/12 00:32:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26643 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:50.970 
request: 00:13:50.970 { 00:13:50.970 "method": "nvmf_create_subsystem", 00:13:50.970 "params": { 00:13:50.970 "nqn": "nqn.2016-06.io.spdk:cnode26643", 00:13:50.970 "tgt_name": "foobar" 00:13:50.970 } 00:13:50.970 } 00:13:50.970 Got JSON-RPC error response 00:13:50.970 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:50.970 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:50.970 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5337 00:13:51.229 [2024-07-12 00:32:55.916228] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5337: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:51.229 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/12 00:32:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5337 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:51.229 request: 00:13:51.229 { 00:13:51.229 "method": "nvmf_create_subsystem", 00:13:51.229 "params": { 00:13:51.229 "nqn": "nqn.2016-06.io.spdk:cnode5337", 00:13:51.229 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:51.229 } 00:13:51.229 } 00:13:51.229 Got JSON-RPC error response 00:13:51.229 GoRPCClient: error on JSON-RPC call' 00:13:51.229 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/12 00:32:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5337 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:51.229 request: 00:13:51.229 { 00:13:51.229 "method": "nvmf_create_subsystem", 00:13:51.229 "params": { 00:13:51.229 "nqn": "nqn.2016-06.io.spdk:cnode5337", 00:13:51.229 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:51.229 } 00:13:51.229 } 00:13:51.229 Got JSON-RPC error response 00:13:51.229 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:51.229 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:51.229 00:32:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23987 00:13:51.488 [2024-07-12 00:32:56.172476] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23987: invalid model number 'SPDK_Controller' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/12 00:32:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23987], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:51.488 request: 00:13:51.488 { 00:13:51.488 "method": "nvmf_create_subsystem", 00:13:51.488 "params": { 00:13:51.488 "nqn": "nqn.2016-06.io.spdk:cnode23987", 00:13:51.488 "model_number": "SPDK_Controller\u001f" 00:13:51.488 } 00:13:51.488 } 00:13:51.488 Got JSON-RPC error response 00:13:51.488 GoRPCClient: error on JSON-RPC call' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/12 00:32:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode23987], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:51.488 request: 00:13:51.488 { 00:13:51.488 "method": "nvmf_create_subsystem", 00:13:51.488 "params": { 00:13:51.488 "nqn": "nqn.2016-06.io.spdk:cnode23987", 00:13:51.488 "model_number": "SPDK_Controller\u001f" 00:13:51.488 } 00:13:51.488 } 00:13:51.488 Got JSON-RPC error response 00:13:51.488 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:51.488 00:32:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:51.488 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:51.489 00:32:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.489 00:32:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'KFy9Vo2 rhC1/Ewav}qX' 00:13:51.489 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'KFy9Vo2 rhC1/Ewav}qX' nqn.2016-06.io.spdk:cnode2488 00:13:51.749 [2024-07-12 00:32:56.504735] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2488: invalid serial number 'KFy9Vo2 rhC1/Ewav}qX' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/12 00:32:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2488 serial_number:KFy9Vo2 rhC1/Ewav}qX], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN KFy9Vo2 rhC1/Ewav}qX 00:13:51.749 request: 00:13:51.749 { 00:13:51.749 "method": "nvmf_create_subsystem", 00:13:51.749 "params": { 00:13:51.749 "nqn": "nqn.2016-06.io.spdk:cnode2488", 00:13:51.749 "serial_number": "KFy9V\u007fo2 rhC1/Ewav}qX" 00:13:51.749 } 00:13:51.749 } 00:13:51.749 Got JSON-RPC error response 00:13:51.749 GoRPCClient: error on JSON-RPC call' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/12 00:32:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2488 serial_number:KFy9Vo2 rhC1/Ewav}qX], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN KFy9Vo2 rhC1/Ewav}qX 00:13:51.749 request: 00:13:51.749 { 00:13:51.749 "method": "nvmf_create_subsystem", 00:13:51.749 "params": { 00:13:51.749 "nqn": "nqn.2016-06.io.spdk:cnode2488", 00:13:51.749 "serial_number": "KFy9V\u007fo2 rhC1/Ewav}qX" 00:13:51.749 } 00:13:51.749 } 00:13:51.749 Got JSON-RPC error response 00:13:51.749 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 63 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x45' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.749 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='^' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.750 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:52.010 00:32:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'l+>m\fESD?z)H$$roE,Z4<I,%^)q9u#F`mtDDwLO' [...] nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.970 00:33:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
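The long trace above is invalid.sh's gen_random_s helper expanding one character at a time: it draws random entries from the chars array (the decimal codes 32 through 127 listed at the top of each run), converts each code to hex with printf %x, appends the echo -e expansion to string, and finally checks the first character against '-' (the [[ K == \- ]] and [[ l == \- ]] lines) before echoing the result. A condensed bash sketch of the same idea, re-derived from the trace rather than copied from the script:

  gen_random_s() {
      local length=$1 ll string=
      local chars=({32..127})   # the same code pool as the chars=(...) line in the trace
      for ((ll = 0; ll < length; ll++)); do
          # pick a random code, render it as \xNN, append the resulting character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      # the real helper also guards against a leading '-' here, per the [[ ... == \- ]] checks
      echo "$string"
  }
  gen_random_s 21   # e.g. 'KFy9Vo2 rhC1/Ewav}qX' above, with an invisible \x7f after the V

The \x7f (DEL) and \x1f bytes are the interesting draws: they are exactly the non-printable characters the target is expected to reject, which is why they surface as \u007f and \u001f in the JSON-RPC request dumps alongside the Invalid SN / Invalid MN errors.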
00:13:55.970 00:33:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:55.970 ************************************ 00:13:55.970 END TEST nvmf_invalid 00:13:55.970 ************************************ 00:13:55.970 00:13:55.970 real 0m6.784s 00:13:55.970 user 0m24.868s 00:13:55.970 sys 0m1.448s 00:13:55.970 00:33:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.970 00:33:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:55.970 00:33:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.970 00:33:00 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:55.970 00:33:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.970 00:33:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.970 00:33:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.970 ************************************ 00:13:55.970 START TEST nvmf_abort 00:13:55.970 ************************************ 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:55.970 * Looking for test storage... 00:13:55.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.970 00:33:00 
nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:55.970 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:55.971 00:33:00 
nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:55.971 Cannot find device "nvmf_tgt_br" 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.971 Cannot find device "nvmf_tgt_br2" 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:55.971 Cannot find device "nvmf_tgt_br" 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:13:55.971 Cannot find device "nvmf_tgt_br2" 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:55.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:55.971 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.229 00:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.229 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.229 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
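The block above is nvmf/common.sh's nvmf_veth_init: the "Cannot find device" and "Cannot open network namespace" complaints are tolerated teardown of a previous run (each failing cleanup command is followed by a true in the trace), after which the target gets its own network namespace behind a veth-and-bridge topology. A trimmed, standalone sketch of the wiring, with names and addresses as in the log (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and elided here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port

The three pings that follow are the smoke test for this wiring: initiator to each target address across the bridge, then the target namespace back to 10.0.0.1.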
00:13:56.229 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:56.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:56.229 00:13:56.229 --- 10.0.0.2 ping statistics --- 00:13:56.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.229 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:56.229 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:56.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:56.229 00:13:56.229 --- 10.0.0.3 ping statistics --- 00:13:56.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.229 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:56.229 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:56.230 00:13:56.230 --- 10.0.0.1 ping statistics --- 00:13:56.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.230 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=73761 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 73761 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 73761 ']' 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
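With both sides answering pings, nvmfappstart launches nvmf_tgt inside the target namespace (note NVMF_APP being prefixed with NVMF_TARGET_NS_CMD just above) and waitforlisten polls the RPC socket, giving up after max_retries=100. A simplified sketch of that handshake, assuming the repo path from the log; the polling loop below is a stand-in for waitforlisten's actual retry logic, not a copy of it:

  cd /home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # 73761 in this run
  # poll the UNIX-domain RPC socket until the app answers, capped like max_retries=100
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done

The 0xE core mask is what produces the three reactors on cores 1, 2 and 3 reported in the EAL output just below.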
00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.230 00:33:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:56.487 [2024-07-12 00:33:01.166751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:56.487 [2024-07-12 00:33:01.166955] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.487 [2024-07-12 00:33:01.333763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.745 [2024-07-12 00:33:01.582323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.745 [2024-07-12 00:33:01.582403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.745 [2024-07-12 00:33:01.582432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.745 [2024-07-12 00:33:01.582448] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.745 [2024-07-12 00:33:01.582459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.745 [2024-07-12 00:33:01.582694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.745 [2024-07-12 00:33:01.582848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.745 [2024-07-12 00:33:01.582860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.311 [2024-07-12 00:33:02.212533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.311 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.570 Malloc0 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
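rpc_cmd in the trace above is autotest_common.sh's thin wrapper around scripts/rpc.py talking to the freshly started target. Written out as plain invocations, with the flag meanings annotated as I read them from rpc.py's nvmf_create_transport options (treat the annotations as a best-effort reading, not authoritative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # -t tcp  : transport type under test
  # -o      : TCP-specific toggle for the C2H success optimization (assumed)
  # -u 8192 : I/O unit size in bytes (assumed)
  # -a 256  : admin queue depth (assumed)
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0   # 64 MB RAM-backed bdev, 4096-byte blocks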
00:13:57.570 Delay0 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.570 [2024-07-12 00:33:02.337196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.570 00:33:02 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:57.828 [2024-07-12 00:33:02.590466] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:59.723 Initializing NVMe Controllers 00:13:59.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:59.723 controller IO queue size 128 less than required 00:13:59.723 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:59.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:59.723 Initialization complete. Launching workers. 
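Taken together, the RPCs traced above stand up the abort target: Malloc0 is wrapped in a delay bdev with deliberately large per-operation latencies, so plenty of I/O is still in flight when abort requests chase it, and Delay0 is exported as namespace 1 of cnode0 on the veth address. The same sequence as plain rpc.py calls, copied from the trace, followed by the workload that produced the initialization banner above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latencies, all set high
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # drive it with the abort example: one core, one second, queue depth 128
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The counters that follow read as: 27794 aborts succeeded, 61 came back unsuccessful (presumably the target had already completed those I/Os), and no submissions outright failed.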
00:13:59.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27794 00:13:59.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27855, failed to submit 66 00:13:59.723 success 27794, unsuccess 61, failed 0 00:13:59.723 00:33:04 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.723 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.723 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.981 rmmod nvme_tcp 00:13:59.981 rmmod nvme_fabrics 00:13:59.981 rmmod nvme_keyring 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 73761 ']' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 73761 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 73761 ']' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 73761 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73761 00:13:59.981 killing process with pid 73761 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73761' 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 73761 00:13:59.981 00:33:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 73761 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.356 00:33:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:01.356 00:14:01.356 real 0m5.519s 00:14:01.356 user 0m14.959s 00:14:01.356 sys 0m1.162s 00:14:01.357 00:33:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.357 ************************************ 00:14:01.357 END TEST nvmf_abort 00:14:01.357 ************************************ 00:14:01.357 00:33:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:01.357 00:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:01.357 00:33:06 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:01.357 00:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.357 00:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.357 00:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.357 ************************************ 00:14:01.357 START TEST nvmf_ns_hotplug_stress 00:14:01.357 ************************************ 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:01.357 * Looking for test storage... 00:14:01.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:01.357 00:33:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.357 00:33:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:01.357 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:01.616 Cannot find device "nvmf_tgt_br" 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:01.616 Cannot find device "nvmf_tgt_br2" 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:01.616 Cannot find device "nvmf_tgt_br" 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:01.616 Cannot find device "nvmf_tgt_br2" 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:01.616 00:33:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:01.616 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.874 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.874 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.874 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.874 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.874 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:01.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:14:01.875 00:14:01.875 --- 10.0.0.2 ping statistics --- 00:14:01.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.875 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:01.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:01.875 00:14:01.875 --- 10.0.0.3 ping statistics --- 00:14:01.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.875 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:01.875 00:14:01.875 --- 10.0.0.1 ping statistics --- 00:14:01.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.875 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=74050 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 74050 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 74050 ']' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.875 00:33:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.875 [2024-07-12 00:33:06.774076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:01.875 [2024-07-12 00:33:06.774283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.132 [2024-07-12 00:33:06.954441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.389 [2024-07-12 00:33:07.163153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
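
Stepping back to the plumbing verified by the three pings a few lines up (initiator -> 10.0.0.2, initiator -> 10.0.0.3, and target namespace -> 10.0.0.1): condensed, the topology that nvmf_veth_init assembled is sketched below. It only collects ip/iptables commands that appear verbatim in the trace; the `ip link set ... up` calls on each interface and the idempotent teardown attempts that precede the build are omitted for brevity.

  # The target runs inside its own network namespace; the host side is the initiator.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target leg
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # ties the *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
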
00:14:02.389 [2024-07-12 00:33:07.163235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.389 [2024-07-12 00:33:07.163252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.390 [2024-07-12 00:33:07.163266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.390 [2024-07-12 00:33:07.163276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.390 [2024-07-12 00:33:07.163501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.390 [2024-07-12 00:33:07.164491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.390 [2024-07-12 00:33:07.164510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:02.956 00:33:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.213 [2024-07-12 00:33:08.006339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.213 00:33:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.523 00:33:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.781 [2024-07-12 00:33:08.535296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.781 00:33:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.039 00:33:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:04.297 Malloc0 00:14:04.298 00:33:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:04.555 Delay0 00:14:04.555 00:33:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.813 00:33:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:05.071 NULL1 00:14:05.071 
00:33:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.329 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=74181 00:14:05.329 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:05.329 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:05.329 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.587 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.845 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:05.845 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:06.103 true 00:14:06.103 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:06.103 00:33:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.361 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.619 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:06.619 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:06.877 true 00:14:06.877 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:06.877 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.136 00:33:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.395 00:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:07.395 00:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:07.654 true 00:14:07.654 00:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:07.654 00:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.913 00:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.171 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:08.171 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:08.429 true 00:14:08.429 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:08.429 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.993 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.993 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:08.993 00:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:09.250 true 00:14:09.250 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:09.250 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.507 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.764 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:09.764 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:10.022 true 00:14:10.022 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:10.022 00:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.588 00:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.588 00:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:10.588 00:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:10.846 true 00:14:10.846 00:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:10.846 00:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.413 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.413 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:11.413 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:11.671 true 00:14:11.671 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:11.671 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:11.929 00:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.187 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:12.187 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:12.444 true 00:14:12.444 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:12.444 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.702 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.267 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:13.267 00:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:13.267 true 00:14:13.267 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:13.267 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.525 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.784 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:13.784 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:14.042 true 00:14:14.042 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:14.042 00:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.299 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.558 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:14.558 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:14.816 true 00:14:14.816 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:14.816 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.074 00:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.331 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:15.331 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:15.588 true 00:14:15.588 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:15.588 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.846 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.104 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:16.104 00:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:16.363 true 00:14:16.363 00:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:16.363 00:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.622 00:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.188 00:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:17.188 00:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:17.188 true 00:14:17.188 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:17.188 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.461 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.723 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:17.723 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:17.981 true 00:14:17.981 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:17.981 00:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.239 00:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.496 00:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:18.496 00:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:18.753 true 00:14:18.753 00:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:18.753 00:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.011 00:33:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.269 00:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:19.269 00:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:19.835 true 00:14:19.835 00:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:19.835 00:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.835 00:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.094 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:20.094 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:20.352 true 00:14:20.352 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:20.352 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.610 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.988 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:20.988 00:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:21.245 true 00:14:21.245 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:21.245 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.503 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.761 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:21.761 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:22.020 true 00:14:22.020 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:22.020 00:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.278 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.537 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:22.537 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1022 00:14:22.795 true 00:14:22.795 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:22.795 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.054 00:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.313 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:23.313 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:23.572 true 00:14:23.572 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:23.572 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.873 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.132 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:24.132 00:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:24.392 true 00:14:24.392 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:24.392 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.650 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.908 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:24.908 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:25.166 true 00:14:25.166 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:25.166 00:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.424 00:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.683 00:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:25.683 00:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:25.941 true 00:14:25.941 00:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:25.941 00:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.199 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.457 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:26.457 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:26.716 true 00:14:26.716 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:26.716 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.974 00:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.232 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:27.232 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:27.490 true 00:14:27.490 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:27.490 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.748 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.007 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:28.007 00:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:28.264 true 00:14:28.264 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:28.264 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.522 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.780 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:28.780 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:29.038 true 00:14:29.038 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:29.038 00:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.296 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.554 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:29.554 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:29.813 true 
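
Thirty-odd resize iterations in, it is worth restating what this test is doing. Below is a condensed bash sketch of the setup and loop traced since the start of nvmf_ns_hotplug_stress, built only from commands that appear verbatim in the trace. The while-loop form of the @44-@50 lines is inferred from their repetition, the waitforlisten step that blocks on /var/tmp/spdk.sock is elided, and the comment on -Q describes its role in this test rather than the flag's exact definition (see spdk_nvme_perf --help).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target runs inside the namespace built earlier, on cores 1-3 (-m 0xE).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MB RAM disk, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
  $rpc bdev_null_create NULL1 1000 512                 # 1000 MB null bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2
  # 30 s of queue-depth-128 random reads; -Q keeps perf going through the
  # I/O errors this test deliberately provokes.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID"; do                        # loop until perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # yank NSID 1 mid-I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # plug it back in
      $rpc bdev_null_resize NULL1 $((++null_size))                 # grow NSID 2 while it serves reads
  done

When the 30-second perf job exits, the `kill -0` probe at line 44 fails with "No such process" and the loop ends; that is exactly what appears further down in this trace.
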
00:14:29.813 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:29.813 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.071 00:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.329 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:30.329 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:30.587 true 00:14:30.587 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:30.587 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.846 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.103 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:31.103 00:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:31.360 true 00:14:31.360 00:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:31.360 00:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.618 00:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.875 00:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:31.875 00:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:32.133 true 00:14:32.390 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:32.390 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.390 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.648 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:32.648 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:32.907 true 00:14:32.907 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:32.907 00:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.165 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.730 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:33.730 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:33.730 true 00:14:33.730 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:33.730 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.988 00:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.246 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:34.246 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:34.504 true 00:14:34.504 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:34.504 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.761 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.018 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:35.018 00:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:35.276 true 00:14:35.276 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:35.276 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.533 Initializing NVMe Controllers 00:14:35.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.533 Controller IO queue size 128, less than required. 00:14:35.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.533 Initialization complete. Launching workers. 
00:14:35.533 ========================================================
00:14:35.534                                                                            Latency(us)
00:14:35.534 Device Information                                                       :     IOPS     MiB/s    Average        min        max
00:14:35.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16723.47      8.17    7654.23    5060.72   20975.45
00:14:35.534 ========================================================
00:14:35.534 Total                                                                    : 16723.47      8.17    7654.23    5060.72   20975.45
00:14:35.534
00:14:35.534 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.791 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:35.791 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:36.073 true 00:14:36.073 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 74181 00:14:36.073 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (74181) - No such process 00:14:36.073 00:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 74181 00:14:36.329 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.329 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:36.586 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:36.586 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:36.586 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:36.586 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.586 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:36.843 null0 00:14:36.843 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.843 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.843 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:37.101 null1 00:14:37.101 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.101 00:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.101 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:37.358 null2 00:14:37.358 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.358 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.358 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:37.617 null3 00:14:37.617 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:37.617 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.617 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:37.874 null4 00:14:37.874 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.874 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.874 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:38.131 null5 00:14:38.131 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:38.131 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:38.131 00:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:38.388 null6 00:14:38.388 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:38.388 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:38.388 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:38.646 null7 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 75417 75419 75421 75423 75424 75426 75429 75430
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:38.646 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:38.903 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
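From here on, the xtrace is the interleaved output of eight background add_remove workers, each repeatedly attaching and detaching its own namespace while 'wait 75417 75419 75421 75423 75424 75426 75429 75430' blocks on them; the shuffled nsid order in the remove records above is just the workers racing one another. Reconstructed from the sh@14-@18 and sh@62-@66 xtrace, the worker and its launch look roughly like this (a sketch, not the verbatim test script):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # one worker per namespace/bdev pair
        pids+=($!)
    done
    wait "${pids[@]}"                      # 75417 75419 ... in this run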
00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.161 00:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.161 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.418 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.677 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.935 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.193 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.451 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.709 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.710 00:33:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.710 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.968 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.227 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.227 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.227 
00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.227 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.227 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.227 00:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.227 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.485 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.485 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.486 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.744 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.003 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.261 00:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.261 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.519 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.778 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.036 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.037 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.037 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.037 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.037 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.037 00:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:43.295 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.554 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.813 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.072 00:33:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:44.072 00:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.330 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:44.331 rmmod nvme_tcp
00:14:44.331 rmmod nvme_fabrics
00:14:44.331 rmmod nvme_keyring
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 74050 ']'
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 74050
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 74050 ']'
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 74050
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74050
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74050'
00:14:44.331 killing process with pid 74050
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 74050
00:14:44.331 00:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 74050
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:45.731 00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:14:45.731
00:14:45.731 real 0m44.367s
00:14:45.731 user 3m35.221s
00:14:45.731 sys 0m14.019s
00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:45.731 ************************************
00:14:45.731 END TEST nvmf_ns_hotplug_stress
00:14:45.731 ************************************
00:14:45.731 00:33:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:45.731 00:33:50 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:45.731 00:33:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:45.731 00:33:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:45.731 00:33:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:45.731 ************************************
00:14:45.731 START TEST nvmf_connect_stress
00:14:45.731 ************************************
00:14:45.731 00:33:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:45.731 * Looking for test storage...
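One note on the teardown just completed, before the connect_stress preamble continues: nvmftestfini first unloads the kernel initiator modules (nvmfcleanup, nvmf/common.sh@117-@125), then kills the SPDK target (killprocess 74050). The unload is wrapped in a retry loop because the modules can still be busy immediately after a test; the rmmod lines above are the verbose output of the first, successful modprobe -r. A sketch of that sequence (the break/sleep details are an assumption, since the log only shows one iteration):

    sync
    set +e                                 # tolerate failures while module refcounts drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -v prints the rmmod commands seen above
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e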
00:14:45.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.990 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.991 Cannot find device "nvmf_tgt_br" 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.991 Cannot find device "nvmf_tgt_br2" 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.991 Cannot find device "nvmf_tgt_br" 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.991 Cannot find device "nvmf_tgt_br2" 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:14:45.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.991 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.250 00:33:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:46.250 00:14:46.250 --- 10.0.0.2 ping statistics --- 00:14:46.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.250 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:14:46.250 00:14:46.250 --- 10.0.0.3 ping statistics --- 00:14:46.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.250 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:46.250 00:14:46.250 --- 10.0.0.1 ping statistics --- 00:14:46.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.250 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=76729 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 76729 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 76729 ']' 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
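nvmfappstart above starts the target with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, records nvmfpid, and then blocks in waitforlisten until the RPC socket answers. Only rpc_addr, max_retries, and the status message are visible in the trace; the readiness probe and the poll interval in this sketch are assumptions (rpc_get_methods is a stock SPDK RPC, used here purely as a liveness check):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # default taken from the trace
        local max_retries=100                     # as traced at line 834
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" || return 1            # target died before it could listen
            # assumed probe: any RPC that succeeds proves the socket is serving
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5                             # assumed poll interval
        done
        return 1
    }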
00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.250 00:33:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.250 [2024-07-12 00:33:51.167694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:46.250 [2024-07-12 00:33:51.167896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.510 [2024-07-12 00:33:51.341203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:46.774 [2024-07-12 00:33:51.641242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.774 [2024-07-12 00:33:51.641315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.774 [2024-07-12 00:33:51.641355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.774 [2024-07-12 00:33:51.641373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.774 [2024-07-12 00:33:51.641388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.774 [2024-07-12 00:33:51.641939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.774 [2024-07-12 00:33:51.642128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.774 [2024-07-12 00:33:51.642138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 [2024-07-12 00:33:52.137510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 [2024-07-12 00:33:52.162783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.342 NULL1 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=76781 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.342 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.343 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.909 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.909 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:47.909 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.909 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.909 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.167 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.167 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:48.167 00:33:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.167 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.167 00:33:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.423 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:48.423 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:48.423 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.423 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.423 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.680 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.680 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:48.680 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.680 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.680 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.244 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.244 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:49.244 00:33:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.244 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.244 00:33:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.502 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.502 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:49.502 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.502 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.502 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.759 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.759 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:49.760 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.760 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.760 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.017 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.017 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:50.017 00:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.017 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.017 00:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.274 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.274 00:33:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:50.274 00:33:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.274 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.274 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.859 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.859 00:33:55 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 76781 00:14:50.860 00:33:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.860 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.860 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.118 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.118 00:33:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:51.118 00:33:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.118 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.118 00:33:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.376 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:51.376 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.376 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.376 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.634 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.634 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:51.634 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.634 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.634 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.893 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.893 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:51.893 00:33:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.893 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.893 00:33:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.459 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.459 00:33:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:52.459 00:33:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.459 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.459 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.717 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.717 00:33:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:52.717 00:33:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.717 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.717 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.975 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.975 00:33:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:52.975 00:33:57 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.975 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.975 00:33:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.234 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.234 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:53.234 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.234 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.234 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.800 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.800 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:53.800 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.800 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.800 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.058 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.058 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:54.058 00:33:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.058 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.058 00:33:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.316 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.316 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:54.316 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.316 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.316 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.576 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.576 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:54.576 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.576 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.576 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.142 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.142 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:55.142 00:33:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.142 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.142 00:33:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.399 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.399 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:55.399 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
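The repeating kill -0 76781 / rpc_cmd pairs above are connect_stress.sh's supervision loop: while the connect_stress binary hammers the subsystem, the script keeps replaying a batch of queued RPCs against the live target. Condensed from the traced commands; what each cat at line 28 appends to rpc.txt is not visible in this excerpt, and feeding rpc.txt to rpc_cmd on stdin is an assumption, since xtrace does not print redirections:

    # Target-side setup, as traced at connect_stress.sh lines 15-18:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512   # null bdev: size 1000 (MB), 512-byte blocks

    # Stress client pinned to core 0 for a 10-second run (line 20):
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Replay the queued RPCs for as long as the client stays alive (lines 34-35);
    # the loop ends with the "No such process" message seen further down:
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID"; do
        rpc_cmd <"$rpcs"                      # stdin redirection is assumed
    done
    wait "$PERF_PID"                          # line 38: collect the client's exit status
    rm -f "$rpcs"                             # line 39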
00:14:55.399 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.399 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.658 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.658 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:55.658 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.658 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.658 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.916 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.916 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:55.916 00:34:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.916 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.916 00:34:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.174 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.174 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:56.174 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.174 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.174 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.741 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:56.741 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.741 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.741 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.999 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.999 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:56.999 00:34:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.999 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.999 00:34:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.257 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.257 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:57.257 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.257 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.257 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.515 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.515 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:57.515 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:57.515 00:34:02 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.515 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.774 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 76781 00:14:57.774 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (76781) - No such process 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 76781 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.774 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.033 rmmod nvme_tcp 00:14:58.033 rmmod nvme_fabrics 00:14:58.033 rmmod nvme_keyring 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 76729 ']' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 76729 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 76729 ']' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 76729 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76729 00:14:58.033 killing process with pid 76729 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76729' 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 76729 00:14:58.033 00:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 76729 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
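killprocess, used on both targets in this excerpt (PIDs 74050 and 76729), validates the PID before signalling it; the ps comm= lookup is why process_name=reactor_1 shows up in the trace. A sketch assembled from the traced checks; the branch taken when the command name is sudo is never exercised here, so its body is a guess:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1             # the '[' -z 76729 ']' guard
        kill -0 "$pid"                        # confirm the process still exists
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for nvmf_tgt
        fi
        if [ "$process_name" = sudo ]; then
            return 1                          # assumed: never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it so shared memory and ports are released
    }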
00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:59.433 00:14:59.433 real 0m13.480s 00:14:59.433 user 0m43.090s 00:14:59.433 sys 0m3.572s 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.433 00:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.433 ************************************ 00:14:59.433 END TEST nvmf_connect_stress 00:14:59.433 ************************************ 00:14:59.433 00:34:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.433 00:34:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:59.433 00:34:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.433 00:34:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.433 00:34:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.433 ************************************ 00:14:59.433 START TEST nvmf_fused_ordering 00:14:59.433 ************************************ 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:59.433 * Looking for test storage... 
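nvmf_tcp_fini above undoes the virtual topology: when the namespace name matches the nvmf_tgt_ns prefix, the namespace is destroyed (taking the veth ends inside it along), and any IPv4 addresses left on the initiator interface are flushed. A sketch under the assumption that _remove_spdk_ns amounts to deleting the namespace; the 14> /dev/null redirect in the trace sends fd 14, presumably the xtrace descriptor, to /dev/null for that one call:

    nvmf_tcp_fini() {
        if [[ $NVMF_TARGET_NAMESPACE == nvmf_tgt_ns* ]]; then   # nvmf_tgt_ns_spdk in this run
            remove_spdk_ns                                      # wraps _remove_spdk_ns
        fi
        ip -4 addr flush nvmf_init_if
    }

    _remove_spdk_ns() {
        # assumed body: deleting the netns removes nvmf_tgt_if and nvmf_tgt_if2,
        # which also drops their bridge-side peers out of nvmf_br
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    }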
00:14:59.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:59.433 Cannot find device "nvmf_tgt_br" 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.433 Cannot find device "nvmf_tgt_br2" 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:14:59.433 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:59.434 Cannot find device "nvmf_tgt_br" 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:59.434 Cannot find device "nvmf_tgt_br2" 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:59.434 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:14:59.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:59.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:59.701 00:14:59.701 --- 10.0.0.2 ping statistics --- 00:14:59.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.701 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:59.701 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:59.701 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:59.701 00:14:59.701 --- 10.0.0.3 ping statistics --- 00:14:59.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.701 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:59.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:59.701 00:14:59.701 --- 10.0.0.1 ping statistics --- 00:14:59.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.701 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=77122 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 77122 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 77122 ']' 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
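[annotation] The nvmf_veth_init sequence above builds the entire test network in software: a dedicated namespace for the target, three veth pairs, a bridge joining the host-side peers, firewall openings for the NVMe/TCP port, and three pings proving 10.0.0.1, 10.0.0.2 and 10.0.0.3 can all reach each other. Reduced to its effective commands (a sketch reconstructed from the trace above; interface names and addresses are the harness's own):

  # target namespace plus three veth pairs (one initiator link, two target links)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace; everything gets a 10.0.0.0/24 address
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  # bring every link up, bridge the host-side peers, open TCP/4420
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace are expected: the harness first tries to delete any leftover topology from a previous run before creating a fresh one.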
00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.701 00:34:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.958 [2024-07-12 00:34:04.707045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:59.958 [2024-07-12 00:34:04.707213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.959 [2024-07-12 00:34:04.887835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.524 [2024-07-12 00:34:05.173513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.524 [2024-07-12 00:34:05.173598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.524 [2024-07-12 00:34:05.173614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.524 [2024-07-12 00:34:05.173629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.524 [2024-07-12 00:34:05.173640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.524 [2024-07-12 00:34:05.173682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:00.782 [2024-07-12 00:34:05.697047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.782 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
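[annotation] nvmfappstart then boots the target itself: the whole nvmf_tgt process runs inside the namespace (hence the ip netns exec prefix in NVMF_TARGET_NS_CMD), pinned to core 1 by the 0x2 core mask, with all tracepoint groups enabled, and waitforlisten blocks until the app's RPC socket answers. As a sketch (the polling loop illustrates what waitforlisten does rather than quoting its code; rpc_get_methods is just a cheap RPC that succeeds once the app is up):

  # launch the target in the namespace: SHM id 0, all tracepoints (-e 0xFFFF), core mask 0x2
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # block until the UNIX-domain RPC socket at /var/tmp/spdk.sock accepts requests
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done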
00:15:00.782 [2024-07-12 00:34:05.713170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:01.041 NULL1 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.041 00:34:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:01.041 [2024-07-12 00:34:05.807970] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
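[annotation] With the listener confirmed on 10.0.0.2 port 4420, the rpc_cmd calls above have provisioned the target end to end: TCP transport, subsystem cnode1, listener, a null bdev, and that bdev attached as a namespace. rpc_cmd is the harness's wrapper around SPDK's RPC client, so outside the harness the same sequence can be issued directly (a sketch; option values are copied from the trace, and the null bdev's 1000 MB x 512 B geometry is what shows up below as the 1 GB namespace):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as the harness passes them
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10  # -a: allow any host, -m: max namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # the initiator side then runs the test tool against that listener:
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'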
00:15:01.041 [2024-07-12 00:34:05.808110] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77176 ] 00:15:01.607 Attached to nqn.2016-06.io.spdk:cnode1 00:15:01.607 Namespace ID: 1 size: 1GB 00:15:01.607 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1,022 consecutive progress lines, counter climbing gap-free, timestamps advancing from 00:15:01.607 to 00:15:03.974]
fused_ordering(1023) 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.974 rmmod nvme_tcp 00:15:03.974 rmmod nvme_fabrics 00:15:03.974 rmmod nvme_keyring 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 77122 ']' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 77122 00:15:03.974 00:34:08
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 77122 ']' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 77122 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77122 00:15:03.974 killing process with pid 77122 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77122' 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 77122 00:15:03.974 00:34:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 77122 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:04.952 00:15:04.952 real 0m5.733s 00:15:04.952 user 0m6.925s 00:15:04.952 sys 0m1.643s 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.952 00:34:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.952 ************************************ 00:15:04.952 END TEST nvmf_fused_ordering 00:15:04.952 ************************************ 00:15:05.210 00:34:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:05.210 00:34:09 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:05.210 00:34:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:05.210 00:34:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.210 00:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.210 ************************************ 00:15:05.210 START TEST nvmf_delete_subsystem 00:15:05.210 ************************************ 00:15:05.210 00:34:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:05.210 * Looking for test storage... 
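[annotation] That closes the fused-ordering suite: the trap is cleared, nvmftestfini unloads the kernel initiator modules (the rmmod lines), kills target pid 77122, removes the namespace, and flushes the initiator address, after which the 5.7 s wall-clock summary and the END TEST banner follow. Condensed (a sketch of what nvmftestfini/nvmf_tcp_fini did above; _remove_spdk_ns is the harness's helper, summarized here as ip netns delete):

  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 77122 (reactor_1 was the target's poller thread)
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring, per the rmmod output
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk     # veth peers parked inside vanish with the namespace
  ip -4 addr flush nvmf_init_if        # drop 10.0.0.1/24 from the initiator link

The nvmf_delete_subsystem suite that starts here repeats the identical nvmftestinit bring-up below before reaching its own test body.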
00:15:05.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:05.210 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:05.211 Cannot find device "nvmf_tgt_br" 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.211 Cannot find device "nvmf_tgt_br2" 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:05.211 Cannot find device "nvmf_tgt_br" 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:05.211 Cannot find device "nvmf_tgt_br2" 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:15:05.211 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:05.469 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.470 00:34:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:05.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:05.470 00:15:05.470 --- 10.0.0.2 ping statistics --- 00:15:05.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.470 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:05.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:05.470 00:15:05.470 --- 10.0.0.3 ping statistics --- 00:15:05.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.470 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:05.470 00:15:05.470 --- 10.0.0.1 ping statistics --- 00:15:05.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.470 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.470 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=77400 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 77400 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 77400 ']' 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
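The bring-up that those pings just verified reduces to a small veth-and-bridge topology with the target isolated in its own network namespace. A minimal standalone sketch, assuming root and iproute2; names, addresses, ports and the nvmf_tgt flags are taken from the log above, while the comments are ours and the second target interface (nvmf_tgt_if2, 10.0.0.3) is elided for brevity:

    # Namespace for the target, plus veth pairs bridged back to the initiator side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, the same check as above
    # Launch the target inside the namespace, as nvmfappstart does here:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &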
00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.728 00:34:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.728 [2024-07-12 00:34:10.553237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:05.728 [2024-07-12 00:34:10.554088] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.985 [2024-07-12 00:34:10.733910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:06.243 [2024-07-12 00:34:10.994263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.243 [2024-07-12 00:34:10.994352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.243 [2024-07-12 00:34:10.994387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.243 [2024-07-12 00:34:10.994402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.243 [2024-07-12 00:34:10.994426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.243 [2024-07-12 00:34:10.994626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.243 [2024-07-12 00:34:10.994658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 [2024-07-12 00:34:11.553914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 [2024-07-12 00:34:11.575701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 NULL1 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 Delay0 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.811 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=77451 00:15:06.812 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:06.812 00:34:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:07.070 [2024-07-12 00:34:11.828148] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
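Stripped of the xtrace framing, the target that perf is about to hammer was assembled by six RPCs plus the perf launch itself. A condensed sketch; every path, name and argument is taken from the log, while the $rpc and $perf_pid shorthands and the comments are ours:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd above wraps this script
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags as nvmftestinit chose
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MiB null bdev, 512 B blocks
    # Wrap NULL1 so every read and write sits in flight for ~1 s (average and p99
    # latencies, in microseconds); that window lets the delete below race live I/O.
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Initiator side: 5 s of 70/30 random read/write at queue depth 128, 512 B I/O,
    # on cores 2 and 3 (-c 0xC), matching the lcore 2/3 lines in the perf output.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The 1 s Delay0 latency is what guarantees that I/O is still outstanding when the subsystem is deleted just below.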
00:15:08.986 00:34:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.986 00:34:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.987 00:34:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:08.987 [long run of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' markers as the aborted queue drains; repeated lines condensed]
00:15:08.987 [2024-07-12 00:34:13.882888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000ff80 is same with the state(5) to be set
00:15:08.987 [further Read/Write error completions and 'starting I/O failed: -6' markers through 00:15:08.988; repeated lines condensed]
00:15:09.923 [2024-07-12 00:34:14.847487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f080 is same with the state(5) to be set
00:15:10.181 [further Read/Write error completions; repeated lines condensed]
00:15:10.181 [2024-07-12 00:34:14.881917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010200 is same with the state(5) to be set
00:15:10.181 [further Read/Write error completions; repeated lines condensed]
00:15:10.181 [2024-07-12 00:34:14.883219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000010700 is same with the state(5) to be set
00:15:10.182 [further Read/Write error completions; repeated lines condensed]
00:15:10.182 [2024-07-12 00:34:14.883987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000fa80 is same with the state(5) to be set
00:15:10.182 [further Read/Write error completions; repeated lines condensed]
00:15:10.182 00:34:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.182 00:34:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:10.182 00:34:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77451 00:15:10.182 00:34:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:10.182 Read completed with error (sct=0, sc=8) 00:15:10.182 Read completed with error (sct=0, sc=8) 00:15:10.182 Write completed with error (sct=0, sc=8) 00:15:10.182 Read completed
with error (sct=0, sc=8) 00:15:10.182 Write completed with error (sct=0, sc=8) 00:15:10.182 [2024-07-12 00:34:14.889231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500000f580 is same with the state(5) to be set 00:15:10.182 Initializing NVMe Controllers 00:15:10.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.182 Controller IO queue size 128, less than required. 00:15:10.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:10.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:10.182 Initialization complete. Launching workers. 00:15:10.182 ======================================================== 00:15:10.182 Latency(us) 00:15:10.182 Device Information : IOPS MiB/s Average min max 00:15:10.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.07 0.09 893255.75 1178.89 1017903.76 00:15:10.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.34 0.08 908939.94 2736.55 1015966.09 00:15:10.182 ======================================================== 00:15:10.182 Total : 355.41 0.17 900508.05 1178.89 1017903.76 00:15:10.182 00:15:10.182 [2024-07-12 00:34:14.891154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500000f080 (9): Bad file descriptor 00:15:10.182 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77451 00:15:10.748 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (77451) - No such process 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 77451 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 77451 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 77451 00:15:10.748 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
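The storm of aborted completions above is the point of the test: nvmf_delete_subsystem tears down cnode1's queue pairs underneath the commands still parked in Delay0, each of which then completes with (sct=0, sc=8), the generic status 0x08 that the NVMe spec calls Command Aborted due to SQ Deletion, while the submit-side 'starting I/O failed: -6' is -ENXIO from the dying qpair; perf has to notice and exit unsuccessfully. A sketch of that assertion, reusing the shorthands from the sketches above (NOT is the harness helper that inverts an exit status, mirrored here with a plain if):

    # Delete the subsystem while perf still has I/O in flight inside Delay0, then
    # insist that perf dies, and dies unhappily (per the @32-@45 markers above).
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do      # perf still running?
        (( delay++ > 30 )) && { echo "perf outlived its subsystem" >&2; exit 1; }
        sleep 0.5
    done
    if wait "$perf_pid"; then                      # NOT wait: a zero status is a bug
        echo "perf exited 0 after its subsystem was deleted" >&2
        exit 1
    fi

With that assertion passed, the rpc_cmd nvmf_create_subsystem just above recreates cnode1 for the control run.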
00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:10.749 [2024-07-12 00:34:15.415173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=77499 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:10.749 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:10.749 [2024-07-12 00:34:15.646899] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
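This second pass is the control case: the recreated subsystem stays up, perf 77499 gets its full 3 seconds against Delay0, and the harness merely polls it to completion, budgeting about twenty half-second naps (roughly ten seconds) before it would declare a hang; the eventual 'No such process' from kill is the loop discovering that perf exited on its own. Note in the latency table further down that both cores settle at exactly 128.00 IOPS with ~1.0 s averages, which is Little's law at work: 128 commands permanently in flight, each held ~1 s by Delay0, pins throughput at 128 / 1 s = 128 IOPS per core. The polling idiom, sketched with the same shorthands:

    # Poll-with-deadline loop (per the @56-@60 markers below and the @67 wait).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf still running after ~10 s" >&2
            exit 1
        fi
        sleep 0.5
    done
    wait "$perf_pid"    # control run: this time a zero exit status is required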
00:15:11.007 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.007 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:11.007 00:34:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:11.573 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.573 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:11.573 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.140 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:12.140 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:12.140 00:34:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.707 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:12.707 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:12.707 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:13.293 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:13.293 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:13.293 00:34:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:13.551 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:13.551 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:13.551 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:13.810 Initializing NVMe Controllers 00:15:13.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.810 Controller IO queue size 128, less than required. 00:15:13.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:13.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:13.810 Initialization complete. Launching workers. 
00:15:13.810 ======================================================== 00:15:13.810 Latency(us) 00:15:13.810 Device Information : IOPS MiB/s Average min max 00:15:13.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003985.19 1000247.30 1012685.31 00:15:13.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007545.90 1000815.17 1014563.46 00:15:13.810 ======================================================== 00:15:13.810 Total : 256.00 0.12 1005765.54 1000247.30 1014563.46 00:15:13.810 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 77499 00:15:14.068 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (77499) - No such process 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 77499 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.068 00:34:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.326 rmmod nvme_tcp 00:15:14.326 rmmod nvme_fabrics 00:15:14.326 rmmod nvme_keyring 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 77400 ']' 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 77400 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 77400 ']' 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 77400 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.326 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77400 00:15:14.326 killing process with pid 77400 00:15:14.327 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:14.327 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:14.327 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77400' 00:15:14.327 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 77400 00:15:14.327 00:34:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 77400 00:15:15.701 00:34:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:15.701 00:15:15.701 real 0m10.429s 00:15:15.701 user 0m30.243s 00:15:15.701 sys 0m1.755s 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.701 00:34:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:15.701 ************************************ 00:15:15.701 END TEST nvmf_delete_subsystem 00:15:15.701 ************************************ 00:15:15.701 00:34:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.701 00:34:20 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:15.701 00:34:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.701 00:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.701 00:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.701 ************************************ 00:15:15.701 START TEST nvmf_ns_masking 00:15:15.701 ************************************ 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:15.701 * Looking for test storage... 
00:15:15.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=[paths/export.sh@3: the same toolchain triplet (/opt/go, /opt/golangci, /opt/protoc) prepended again; repeated directories condensed] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin prepended once more; condensed] 00:15:15.701 00:34:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH, identical to the one echoed in the delete_subsystem run above; condensed] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6d3c1dbf-c4e8-4c68-bcd6-363d35abd19d 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4bc76709-7742-45a7-b5b2-f53eaa56c424 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:15.702
00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b9d3d6c0-5037-44b4-b031-5def1585d557 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:15.702 Cannot find device "nvmf_tgt_br" 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:15:15.702 00:34:20 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.702 Cannot find device "nvmf_tgt_br2" 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:15.702 Cannot find device "nvmf_tgt_br" 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:15.702 Cannot find device "nvmf_tgt_br2" 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:15:15.702 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.998 00:34:20 
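[editor's note] The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down leftovers from any previous run, and each cleanup command is followed by `true` so a missing device never fails the script. The plumbing just created, condensed into a replayable sketch (same commands as the xtrace, minus timestamps):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # the *_br halves of each veth pair are enslaved to a bridge next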
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:15.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:15:15.998 00:15:15.998 --- 10.0.0.2 ping statistics --- 00:15:15.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.998 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:15.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:15:15.998 00:15:15.998 --- 10.0.0.3 ping statistics --- 00:15:15.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.998 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:15.998 00:15:15.998 --- 10.0.0.1 ping statistics --- 00:15:15.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.998 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.998 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:16.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
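[editor's note] With nvmf_br enslaving the three *_br veth peers, the root namespace (initiator, 10.0.0.1) and nvmf_tgt_ns_spdk (target, 10.0.0.2/10.0.0.3) share one L2 segment; the iptables rules admit TCP/4420 plus bridge-internal forwarding, and the three pings verify both directions. NVMF_APP is then prefixed with the netns wrapper so the target's listener binds inside the namespace. A sketch of the launch-and-wait step that follows; the readiness poll is an assumption standing in for autotest_common.sh's waitforlisten, whose source is not traced here:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # hypothetical poll until /var/tmp/spdk.sock answers (waitforlisten does roughly this)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done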
00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=77751 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 77751 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 77751 ']' 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.256 00:34:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:16.256 [2024-07-12 00:34:21.055578] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:16.256 [2024-07-12 00:34:21.055734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.514 [2024-07-12 00:34:21.225547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.771 [2024-07-12 00:34:21.463603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.771 [2024-07-12 00:34:21.463686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.771 [2024-07-12 00:34:21.463718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.771 [2024-07-12 00:34:21.463732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.771 [2024-07-12 00:34:21.463743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
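[editor's note] The target comes up on a single core (EAL was passed -c 0x1; the reactor notice below confirms core 0). The checks that dominate the rest of this test all go through one helper, ns_is_visible, whose xtrace (ns_masking.sh@43-45) repeats dozens of times below. Reconstructed from that xtrace; the real helper may differ in minor details:

    # sketch: a namespace counts as "visible" when nvme list-ns reports its NSID
    # and id-ns returns a non-zero NGUID; a masked namespace yields the
    # all-zeros NGUID 00000000000000000000000000000000
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    # negative cases are wrapped in the NOT helper, which asserts the call fails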
00:15:16.771 [2024-07-12 00:34:21.463783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.338 00:34:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.596 [2024-07-12 00:34:22.360777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.596 00:34:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:17.596 00:34:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:17.596 00:34:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.855 Malloc1 00:15:17.855 00:34:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:18.114 Malloc2 00:15:18.114 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:18.372 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:18.938 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.938 [2024-07-12 00:34:23.840165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.938 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:18.938 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9d3d6c0-5037-44b4-b031-5def1585d557 -a 10.0.0.2 -s 4420 -i 4 00:15:19.196 00:34:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.196 00:34:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.196 00:34:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.196 00:34:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.196 00:34:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:21.098 00:34:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.356 [ 0]:0x1 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=72ecbaf61b544a0a88b5f1b64c4bb2dc 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 72ecbaf61b544a0a88b5f1b64c4bb2dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.356 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:21.625 [ 0]:0x1 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=72ecbaf61b544a0a88b5f1b64c4bb2dc 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 72ecbaf61b544a0a88b5f1b64c4bb2dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.625 [ 1]:0x2 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:21.625 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:15:21.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.920 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.179 00:34:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9d3d6c0-5037-44b4-b031-5def1585d557 -a 10.0.0.2 -s 4420 -i 4 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:22.438 00:34:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.971 [ 0]:0x2 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.971 [ 0]:0x1 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=72ecbaf61b544a0a88b5f1b64c4bb2dc 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 72ecbaf61b544a0a88b5f1b64c4bb2dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.971 [ 1]:0x2 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.971 00:34:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:25.539 [ 0]:0x2 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.539 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:25.819 00:34:30 
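[editor's note] This stretch is the core of the masking scenario. Namespace 1 was re-added with --no-auto-visible, so a plain attach from host1 sees only namespace 2; nvmf_ns_add_host then grants host1 access to NSID 1 (both namespaces visible), and the nvmf_ns_remove_host just traced revokes it again, with the controller staying connected throughout. The control-plane sequence, collected from the surrounding xtrace (rpc.py path abbreviated):

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host         nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask
    rpc.py nvmf_ns_remove_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again

The error path is exercised a little further down (ns_masking.sh@111): removing host1 from NSID 2, where it was never added, is expected to fail with a Code=-32602 "Invalid parameters" JSON-RPC error, and the test asserts that failure via the NOT wrapper.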
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b9d3d6c0-5037-44b4-b031-5def1585d557 -a 10.0.0.2 -s 4420 -i 4 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:25.819 00:34:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:28.350 [ 0]:0x1 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=72ecbaf61b544a0a88b5f1b64c4bb2dc 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 72ecbaf61b544a0a88b5f1b64c4bb2dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.350 [ 1]:0x2 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.350 00:34:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.350 [ 0]:0x2 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:28.350 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:28.918 [2024-07-12 00:34:33.578909] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:28.918 2024/07/12 00:34:33 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:15:28.918 request: 00:15:28.918 { 00:15:28.918 "method": "nvmf_ns_remove_host", 00:15:28.918 "params": { 00:15:28.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.918 "nsid": 2, 00:15:28.918 "host": "nqn.2016-06.io.spdk:host1" 00:15:28.918 } 00:15:28.918 } 00:15:28.918 Got JSON-RPC error response 00:15:28.918 GoRPCClient: error on JSON-RPC call 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:28.918 00:34:33 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.918 [ 0]:0x2 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f76df3cb81149f69bcdb0211b8e39ce 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f76df3cb81149f69bcdb0211b8e39ce != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=78132 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 78132 /var/tmp/host.sock 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 78132 ']' 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.918 00:34:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 [2024-07-12 00:34:33.908116] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:29.176 [2024-07-12 00:34:33.908563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78132 ] 00:15:29.176 [2024-07-12 00:34:34.074098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.435 [2024-07-12 00:34:34.307033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.434 00:34:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.434 00:34:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:30.434 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.693 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.952 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6d3c1dbf-c4e8-4c68-bcd6-363d35abd19d 00:15:30.952 00:34:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:30.952 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6D3C1DBFC4E84C68BCD6363D35ABD19D -i 00:15:31.210 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4bc76709-7742-45a7-b5b2-f53eaa56c424 00:15:31.210 00:34:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:31.210 00:34:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4BC76709774245A7B5B2F53EAA56C424 -i 00:15:31.469 00:34:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:31.727 00:34:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:31.986 00:34:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:31.986 00:34:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:32.245 nvme0n1 00:15:32.245 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:32.245 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:32.503 nvme1n2 00:15:32.503 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:32.503 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 
00:15:32.503 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:32.503 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:32.503 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:32.761 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:32.761 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:32.761 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:32.761 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:33.019 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6d3c1dbf-c4e8-4c68-bcd6-363d35abd19d == \6\d\3\c\1\d\b\f\-\c\4\e\8\-\4\c\6\8\-\b\c\d\6\-\3\6\3\d\3\5\a\b\d\1\9\d ]] 00:15:33.019 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:33.019 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:33.019 00:34:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4bc76709-7742-45a7-b5b2-f53eaa56c424 == \4\b\c\7\6\7\0\9\-\7\7\4\2\-\4\5\a\7\-\b\5\b\2\-\f\5\3\e\a\a\5\6\c\4\2\4 ]] 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 78132 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 78132 ']' 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 78132 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78132 00:15:33.278 killing process with pid 78132 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78132' 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 78132 00:15:33.278 00:34:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 78132 00:15:35.808 00:34:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:36.066 
00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.066 rmmod nvme_tcp 00:15:36.066 rmmod nvme_fabrics 00:15:36.066 rmmod nvme_keyring 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 77751 ']' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 77751 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 77751 ']' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 77751 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77751 00:15:36.066 killing process with pid 77751 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77751' 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 77751 00:15:36.066 00:34:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 77751 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:37.971 00:15:37.971 real 0m22.134s 00:15:37.971 user 0m34.720s 00:15:37.971 sys 0m3.199s 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.971 00:34:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:37.971 ************************************ 00:15:37.971 END TEST nvmf_ns_masking 00:15:37.971 ************************************ 00:15:37.971 00:34:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:37.971 00:34:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:15:37.971 00:34:42 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:37.971 00:34:42 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.971 
00:34:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:37.971 00:34:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.971 00:34:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.971 ************************************ 00:15:37.971 START TEST nvmf_vfio_user 00:15:37.971 ************************************ 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.971 * Looking for test storage... 00:15:37.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:37.971 
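MALLOC_BDEV_SIZE is the backing-device size in MiB (the unit bdev_malloc_create takes) and MALLOC_BLOCK_SIZE is the LBA size in bytes, so each malloc bdev the suite creates holds 64 MiB of 512-byte blocks. A one-liner, for illustration only, to check that arithmetic against the identify output further down:

    echo $(( 64 * 1024 * 1024 / 512 ))   # 131072, matching "Size (in LBAs): 131072" in the identify dump below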
00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=78417 00:15:37.971 Process pid: 78417 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 78417' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 78417 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 78417 ']' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.971 00:34:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.971 [2024-07-12 00:34:42.824053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:37.971 [2024-07-12 00:34:42.825077] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.230 [2024-07-12 00:34:43.001230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.487 [2024-07-12 00:34:43.253742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.487 [2024-07-12 00:34:43.253864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.487 [2024-07-12 00:34:43.253881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.487 [2024-07-12 00:34:43.253895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.487 [2024-07-12 00:34:43.253907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
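Once the target's reactors come up (next lines), setup_nvmf_vfio_user provisions each of the NUM_DEVICES=2 controllers over RPC, as traced below. Condensed into a sketch, the per-device sequence is the following (paths, sizes, and NQNs exactly as in the trace; rpc.py talks to the default /var/tmp/spdk.sock the target just started listening on):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER            # once, before the device loop
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1   # socket/BAR directory for device 1
    $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB RAM-backed namespace
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

With the VFIOUSER transport the listener address is a filesystem path rather than an IP:port pair; that same directory is what the initiator-side tools (spdk_nvme_identify, spdk_nvme_perf, and the example binaries run later in this log) pass as traddr.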
00:15:38.487 [2024-07-12 00:34:43.254343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.487 [2024-07-12 00:34:43.254523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.487 [2024-07-12 00:34:43.254601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.487 [2024-07-12 00:34:43.254606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.059 00:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.059 00:34:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:39.059 00:34:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:39.992 00:34:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:40.250 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:40.250 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:40.250 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.250 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:40.250 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.508 Malloc1 00:15:40.765 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:40.766 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:41.022 00:34:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:41.287 00:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.287 00:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:41.287 00:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.853 Malloc2 00:15:41.853 00:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:42.111 00:34:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:42.111 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:42.682 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:42.682 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:42.682 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.682 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # 
test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.683 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.683 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:42.683 [2024-07-12 00:34:47.369162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:42.683 [2024-07-12 00:34:47.369307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78553 ] 00:15:42.683 [2024-07-12 00:34:47.544835] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:42.683 [2024-07-12 00:34:47.552141] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.683 [2024-07-12 00:34:47.552191] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f987db29000 00:15:42.683 [2024-07-12 00:34:47.553089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.554079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.555091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.556102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.557093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.558092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.559099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.560116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.683 [2024-07-12 00:34:47.561115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.683 [2024-07-12 00:34:47.561156] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f987db1e000 00:15:42.683 [2024-07-12 00:34:47.562738] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.683 [2024-07-12 00:34:47.580084] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:42.683 [2024-07-12 00:34:47.580148] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] 
setting state to connect adminq (no timeout) 00:15:42.683 [2024-07-12 00:34:47.585227] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.683 [2024-07-12 00:34:47.585375] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:42.683 [2024-07-12 00:34:47.586133] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:42.683 [2024-07-12 00:34:47.586183] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:42.683 [2024-07-12 00:34:47.586197] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:42.683 [2024-07-12 00:34:47.588453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:42.683 [2024-07-12 00:34:47.588514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:42.683 [2024-07-12 00:34:47.588539] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:42.683 [2024-07-12 00:34:47.589201] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:42.683 [2024-07-12 00:34:47.589257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:42.683 [2024-07-12 00:34:47.589277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.590210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:42.683 [2024-07-12 00:34:47.590261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.591214] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:42.683 [2024-07-12 00:34:47.591266] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:42.683 [2024-07-12 00:34:47.591283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.591303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.591425] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:42.683 [2024-07-12 00:34:47.591439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.591451] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:42.683 [2024-07-12 00:34:47.592218] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:42.683 [2024-07-12 00:34:47.593224] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:42.683 [2024-07-12 00:34:47.594241] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.683 [2024-07-12 00:34:47.595225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.683 [2024-07-12 00:34:47.595362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:42.683 [2024-07-12 00:34:47.596240] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:42.683 [2024-07-12 00:34:47.596285] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:42.683 [2024-07-12 00:34:47.596299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:42.683 [2024-07-12 00:34:47.596355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596388] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.683 [2024-07-12 00:34:47.596413] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.683 [2024-07-12 00:34:47.596446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.683 [2024-07-12 00:34:47.596522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:42.683 [2024-07-12 00:34:47.596548] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:42.683 [2024-07-12 00:34:47.596561] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:42.683 [2024-07-12 00:34:47.596573] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:42.683 [2024-07-12 00:34:47.596583] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:42.683 [2024-07-12 00:34:47.596595] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:42.683 [2024-07-12 00:34:47.596604] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:42.683 [2024-07-12 00:34:47.596615] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:42.683 [2024-07-12 00:34:47.596678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:42.683 [2024-07-12 00:34:47.596704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.683 [2024-07-12 00:34:47.596719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.683 [2024-07-12 00:34:47.596738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.683 [2024-07-12 00:34:47.596755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.683 [2024-07-12 00:34:47.596768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:42.683 [2024-07-12 00:34:47.596821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:42.683 [2024-07-12 00:34:47.596837] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:42.683 [2024-07-12 00:34:47.596847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.596892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.683 [2024-07-12 00:34:47.596906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:42.683 [2024-07-12 00:34:47.597007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.597037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:42.683 [2024-07-12 00:34:47.597059] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:42.683 [2024-07-12 00:34:47.597069] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:42.683 [2024-07-12 00:34:47.597087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:42.683 [2024-07-12 00:34:47.597114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:42.684 [2024-07-12 00:34:47.597181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597224] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.684 [2024-07-12 00:34:47.597237] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.684 [2024-07-12 00:34:47.597249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597384] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.684 [2024-07-12 00:34:47.597428] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.684 [2024-07-12 00:34:47.597445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597550] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597596] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:42.684 [2024-07-12 00:34:47.597607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:42.684 [2024-07-12 00:34:47.597620] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:42.684 [2024-07-12 00:34:47.597675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.597845] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:42.684 [2024-07-12 00:34:47.597857] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:42.684 [2024-07-12 00:34:47.597866] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:42.684 [2024-07-12 00:34:47.597874] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:42.684 [2024-07-12 00:34:47.597889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:42.684 [2024-07-12 00:34:47.597904] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:42.684 [2024-07-12 00:34:47.597916] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:42.684 [2024-07-12 00:34:47.597928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:42.684 [2024-07-12 00:34:47.597960] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.684 [2024-07-12 00:34:47.597974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.597999] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:42.684 [2024-07-12 00:34:47.598014] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:42.684 [2024-07-12 00:34:47.598028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:42.684 [2024-07-12 00:34:47.598045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.598077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.598097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:42.684 [2024-07-12 00:34:47.598111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:42.684 ===================================================== 00:15:42.684 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.684 ===================================================== 00:15:42.684 Controller Capabilities/Features 00:15:42.684 ================================ 00:15:42.684 Vendor ID: 4e58 00:15:42.684 Subsystem Vendor ID: 4e58 00:15:42.684 Serial Number: SPDK1 00:15:42.684 Model Number: SPDK bdev Controller 00:15:42.684 Firmware Version: 24.09 00:15:42.684 Recommended Arb Burst: 6 00:15:42.684 IEEE OUI Identifier: 8d 6b 50 00:15:42.684 Multi-path I/O 00:15:42.684 May have multiple subsystem ports: Yes 00:15:42.684 May have multiple controllers: Yes 00:15:42.684 Associated with SR-IOV VF: No 00:15:42.684 Max Data Transfer Size: 131072 00:15:42.684 Max Number of Namespaces: 32 00:15:42.684 Max Number of I/O Queues: 127 00:15:42.684 NVMe Specification Version (VS): 1.3 00:15:42.684 NVMe Specification Version (Identify): 1.3 00:15:42.684 Maximum Queue Entries: 256 00:15:42.684 Contiguous Queues Required: Yes 00:15:42.684 Arbitration Mechanisms Supported 00:15:42.684 Weighted Round Robin: Not Supported 00:15:42.684 Vendor Specific: Not Supported 00:15:42.684 Reset Timeout: 15000 ms 00:15:42.684 Doorbell Stride: 4 bytes 00:15:42.684 NVM Subsystem Reset: Not Supported 00:15:42.684 Command Sets Supported 00:15:42.684 NVM Command Set: Supported 00:15:42.684 Boot Partition: Not Supported 00:15:42.684 Memory Page Size Minimum: 4096 bytes 00:15:42.684 Memory Page Size Maximum: 4096 bytes 00:15:42.684 Persistent Memory Region: Not Supported 00:15:42.684 Optional Asynchronous Events Supported 00:15:42.684 Namespace Attribute Notices: Supported 00:15:42.684 Firmware Activation Notices: Not Supported 00:15:42.684 ANA Change Notices: Not Supported 00:15:42.684 PLE Aggregate Log Change 
Notices: Not Supported 00:15:42.684 LBA Status Info Alert Notices: Not Supported 00:15:42.684 EGE Aggregate Log Change Notices: Not Supported 00:15:42.684 Normal NVM Subsystem Shutdown event: Not Supported 00:15:42.684 Zone Descriptor Change Notices: Not Supported 00:15:42.684 Discovery Log Change Notices: Not Supported 00:15:42.684 Controller Attributes 00:15:42.684 128-bit Host Identifier: Supported 00:15:42.684 Non-Operational Permissive Mode: Not Supported 00:15:42.684 NVM Sets: Not Supported 00:15:42.684 Read Recovery Levels: Not Supported 00:15:42.684 Endurance Groups: Not Supported 00:15:42.684 Predictable Latency Mode: Not Supported 00:15:42.684 Traffic Based Keep ALive: Not Supported 00:15:42.684 Namespace Granularity: Not Supported 00:15:42.684 SQ Associations: Not Supported 00:15:42.684 UUID List: Not Supported 00:15:42.684 Multi-Domain Subsystem: Not Supported 00:15:42.684 Fixed Capacity Management: Not Supported 00:15:42.684 Variable Capacity Management: Not Supported 00:15:42.684 Delete Endurance Group: Not Supported 00:15:42.684 Delete NVM Set: Not Supported 00:15:42.684 Extended LBA Formats Supported: Not Supported 00:15:42.684 Flexible Data Placement Supported: Not Supported 00:15:42.684 00:15:42.684 Controller Memory Buffer Support 00:15:42.684 ================================ 00:15:42.684 Supported: No 00:15:42.684 00:15:42.684 Persistent Memory Region Support 00:15:42.684 ================================ 00:15:42.684 Supported: No 00:15:42.684 00:15:42.684 Admin Command Set Attributes 00:15:42.684 ============================ 00:15:42.684 Security Send/Receive: Not Supported 00:15:42.684 Format NVM: Not Supported 00:15:42.684 Firmware Activate/Download: Not Supported 00:15:42.684 Namespace Management: Not Supported 00:15:42.684 Device Self-Test: Not Supported 00:15:42.684 Directives: Not Supported 00:15:42.684 NVMe-MI: Not Supported 00:15:42.684 Virtualization Management: Not Supported 00:15:42.684 Doorbell Buffer Config: Not Supported 00:15:42.684 Get LBA Status Capability: Not Supported 00:15:42.684 Command & Feature Lockdown Capability: Not Supported 00:15:42.684 Abort Command Limit: 4 00:15:42.684 Async Event Request Limit: 4 00:15:42.684 Number of Firmware Slots: N/A 00:15:42.684 Firmware Slot 1 Read-Only: N/A 00:15:42.684 Firmware Activation Without Reset: N/A 00:15:42.685 Multiple Update Detection Support: N/A 00:15:42.685 Firmware Update Granularity: No Information Provided 00:15:42.685 Per-Namespace SMART Log: No 00:15:42.685 Asymmetric Namespace Access Log Page: Not Supported 00:15:42.685 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:42.685 Command Effects Log Page: Supported 00:15:42.685 Get Log Page Extended Data: Supported 00:15:42.685 Telemetry Log Pages: Not Supported 00:15:42.685 Persistent Event Log Pages: Not Supported 00:15:42.685 Supported Log Pages Log Page: May Support 00:15:42.685 Commands Supported & Effects Log Page: Not Supported 00:15:42.685 Feature Identifiers & Effects Log Page:May Support 00:15:42.685 NVMe-MI Commands & Effects Log Page: May Support 00:15:42.685 Data Area 4 for Telemetry Log: Not Supported 00:15:42.685 Error Log Page Entries Supported: 128 00:15:42.685 Keep Alive: Supported 00:15:42.685 Keep Alive Granularity: 10000 ms 00:15:42.685 00:15:42.685 NVM Command Set Attributes 00:15:42.685 ========================== 00:15:42.685 Submission Queue Entry Size 00:15:42.685 Max: 64 00:15:42.685 Min: 64 00:15:42.685 Completion Queue Entry Size 00:15:42.685 Max: 16 00:15:42.685 Min: 16 00:15:42.685 Number of Namespaces: 32 
00:15:42.685 Compare Command: Supported 00:15:42.685 Write Uncorrectable Command: Not Supported 00:15:42.685 Dataset Management Command: Supported 00:15:42.685 Write Zeroes Command: Supported 00:15:42.685 Set Features Save Field: Not Supported 00:15:42.685 Reservations: Not Supported 00:15:42.685 Timestamp: Not Supported 00:15:42.685 Copy: Supported 00:15:42.685 Volatile Write Cache: Present 00:15:42.685 Atomic Write Unit (Normal): 1 00:15:42.685 Atomic Write Unit (PFail): 1 00:15:42.685 Atomic Compare & Write Unit: 1 00:15:42.685 Fused Compare & Write: Supported 00:15:42.685 Scatter-Gather List 00:15:42.685 SGL Command Set: Supported (Dword aligned) 00:15:42.685 SGL Keyed: Not Supported 00:15:42.685 SGL Bit Bucket Descriptor: Not Supported 00:15:42.685 SGL Metadata Pointer: Not Supported 00:15:42.685 Oversized SGL: Not Supported 00:15:42.685 SGL Metadata Address: Not Supported 00:15:42.685 SGL Offset: Not Supported 00:15:42.685 Transport SGL Data Block: Not Supported 00:15:42.685 Replay Protected Memory Block: Not Supported 00:15:42.685 00:15:42.685 Firmware Slot Information 00:15:42.685 ========================= 00:15:42.685 Active slot: 1 00:15:42.685 Slot 1 Firmware Revision: 24.09 00:15:42.685 00:15:42.685 00:15:42.685 Commands Supported and Effects 00:15:42.685 ============================== 00:15:42.685 Admin Commands 00:15:42.685 -------------- 00:15:42.685 Get Log Page (02h): Supported 00:15:42.685 Identify (06h): Supported 00:15:42.685 Abort (08h): Supported 00:15:42.685 Set Features (09h): Supported 00:15:42.685 Get Features (0Ah): Supported 00:15:42.685 Asynchronous Event Request (0Ch): Supported 00:15:42.685 Keep Alive (18h): Supported 00:15:42.685 I/O Commands 00:15:42.685 ------------ 00:15:42.685 Flush (00h): Supported LBA-Change 00:15:42.685 Write (01h): Supported LBA-Change 00:15:42.685 Read (02h): Supported 00:15:42.685 Compare (05h): Supported 00:15:42.685 Write Zeroes (08h): Supported LBA-Change 00:15:42.685 Dataset Management (09h): Supported LBA-Change 00:15:42.685 Copy (19h): Supported LBA-Change 00:15:42.685 00:15:42.685 Error Log 00:15:42.685 ========= 00:15:42.685 00:15:42.685 Arbitration 00:15:42.685 =========== 00:15:42.685 Arbitration Burst: 1 00:15:42.685 00:15:42.685 Power Management 00:15:42.685 ================ 00:15:42.685 Number of Power States: 1 00:15:42.685 Current Power State: Power State #0 00:15:42.685 Power State #0: 00:15:42.685 Max Power: 0.00 W 00:15:42.685 Non-Operational State: Operational 00:15:42.685 Entry Latency: Not Reported 00:15:42.685 Exit Latency: Not Reported 00:15:42.685 Relative Read Throughput: 0 00:15:42.685 Relative Read Latency: 0 00:15:42.685 Relative Write Throughput: 0 00:15:42.685 Relative Write Latency: 0 00:15:42.685 Idle Power: Not Reported 00:15:42.685 Active Power: Not Reported 00:15:42.685 Non-Operational Permissive Mode: Not Supported 00:15:42.685 00:15:42.685 Health Information 00:15:42.685 ================== 00:15:42.685 Critical Warnings: 00:15:42.685 Available Spare Space: OK 00:15:42.685 Temperature: OK 00:15:42.685 Device Reliability: OK 00:15:42.685 Read Only: No 00:15:42.685 Volatile Memory Backup: OK 00:15:42.685 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:42.685 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:42.685 Available Spare: 0% 00:15:42.685 Available Sp[2024-07-12 00:34:47.598322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:42.685 [2024-07-12 00:34:47.598343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:42.685 [2024-07-12 00:34:47.598468] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:42.685 [2024-07-12 00:34:47.598492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.685 [2024-07-12 00:34:47.598513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.685 [2024-07-12 00:34:47.598524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.685 [2024-07-12 00:34:47.598537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.685 [2024-07-12 00:34:47.602426] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:42.685 [2024-07-12 00:34:47.602475] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:42.685 [2024-07-12 00:34:47.603253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.685 [2024-07-12 00:34:47.603376] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:42.685 [2024-07-12 00:34:47.603400] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:42.685 [2024-07-12 00:34:47.604247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:42.685 [2024-07-12 00:34:47.604299] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:42.685 [2024-07-12 00:34:47.605093] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:42.685 [2024-07-12 00:34:47.612444] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.943 are Threshold: 0% 00:15:42.943 Life Percentage Used: 0% 00:15:42.943 Data Units Read: 0 00:15:42.943 Data Units Written: 0 00:15:42.943 Host Read Commands: 0 00:15:42.943 Host Write Commands: 0 00:15:42.943 Controller Busy Time: 0 minutes 00:15:42.943 Power Cycles: 0 00:15:42.943 Power On Hours: 0 hours 00:15:42.943 Unsafe Shutdowns: 0 00:15:42.943 Unrecoverable Media Errors: 0 00:15:42.943 Lifetime Error Log Entries: 0 00:15:42.943 Warning Temperature Time: 0 minutes 00:15:42.943 Critical Temperature Time: 0 minutes 00:15:42.943 00:15:42.943 Number of Queues 00:15:42.943 ================ 00:15:42.943 Number of I/O Submission Queues: 127 00:15:42.943 Number of I/O Completion Queues: 127 00:15:42.943 00:15:42.943 Active Namespaces 00:15:42.943 ================= 00:15:42.943 Namespace ID:1 00:15:42.943 Error Recovery Timeout: Unlimited 00:15:42.943 Command Set Identifier: NVM (00h) 00:15:42.943 Deallocate: Supported 00:15:42.943 Deallocated/Unwritten Error: Not Supported 00:15:42.943 Deallocated Read Value: Unknown 00:15:42.943 Deallocate in Write Zeroes: Not Supported 
00:15:42.943 Deallocated Guard Field: 0xFFFF 00:15:42.943 Flush: Supported 00:15:42.943 Reservation: Supported 00:15:42.943 Namespace Sharing Capabilities: Multiple Controllers 00:15:42.943 Size (in LBAs): 131072 (0GiB) 00:15:42.943 Capacity (in LBAs): 131072 (0GiB) 00:15:42.943 Utilization (in LBAs): 131072 (0GiB) 00:15:42.943 NGUID: 75F35B9488CC448E9A52DD456E81B62E 00:15:42.943 UUID: 75f35b94-88cc-448e-9a52-dd456e81b62e 00:15:42.943 Thin Provisioning: Not Supported 00:15:42.943 Per-NS Atomic Units: Yes 00:15:42.943 Atomic Boundary Size (Normal): 0 00:15:42.943 Atomic Boundary Size (PFail): 0 00:15:42.943 Atomic Boundary Offset: 0 00:15:42.943 Maximum Single Source Range Length: 65535 00:15:42.943 Maximum Copy Length: 65535 00:15:42.943 Maximum Source Range Count: 1 00:15:42.943 NGUID/EUI64 Never Reused: No 00:15:42.943 Namespace Write Protected: No 00:15:42.943 Number of LBA Formats: 1 00:15:42.943 Current LBA Format: LBA Format #00 00:15:42.943 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:42.943 00:15:42.943 00:34:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:43.202 [2024-07-12 00:34:48.086972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:48.490 Initializing NVMe Controllers 00:15:48.490 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:48.490 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:48.490 Initialization complete. Launching workers. 00:15:48.490 ======================================================== 00:15:48.490 Latency(us) 00:15:48.490 Device Information : IOPS MiB/s Average min max 00:15:48.490 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24894.20 97.24 5137.85 1432.54 12688.42 00:15:48.490 ======================================================== 00:15:48.490 Total : 24894.20 97.24 5137.85 1432.54 12688.42 00:15:48.490 00:15:48.490 [2024-07-12 00:34:53.104332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:48.490 00:34:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:48.748 [2024-07-12 00:34:53.578647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.077 Initializing NVMe Controllers 00:15:54.077 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:54.077 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:54.077 Initialization complete. Launching workers. 
00:15:54.077 ======================================================== 00:15:54.077 Latency(us) 00:15:54.077 Device Information : IOPS MiB/s Average min max 00:15:54.077 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15035.62 58.73 8512.27 4872.45 21293.12 00:15:54.077 ======================================================== 00:15:54.077 Total : 15035.62 58.73 8512.27 4872.45 21293.12 00:15:54.077 00:15:54.077 [2024-07-12 00:34:58.599081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.077 00:34:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.360 [2024-07-12 00:34:59.024850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.629 [2024-07-12 00:35:04.110551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.629 Initializing NVMe Controllers 00:15:59.629 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.629 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:59.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:59.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:59.629 Initialization complete. Launching workers. 00:15:59.629 Starting thread on core 2 00:15:59.629 Starting thread on core 1 00:15:59.629 Starting thread on core 3 00:15:59.629 00:35:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:59.888 [2024-07-12 00:35:04.609759] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.171 [2024-07-12 00:35:07.746507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.171 Initializing NVMe Controllers 00:16:03.171 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.171 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.171 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:03.171 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:03.171 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:03.171 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:03.171 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:16:03.171 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:03.171 Initialization complete. Launching workers. 
00:16:03.171 Starting thread on core 1 with urgent priority queue 00:16:03.171 Starting thread on core 2 with urgent priority queue 00:16:03.171 Starting thread on core 0 with urgent priority queue 00:16:03.171 Starting thread on core 3 with urgent priority queue 00:16:03.171 SPDK bdev Controller (SPDK1 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:16:03.171 SPDK bdev Controller (SPDK1 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:16:03.171 SPDK bdev Controller (SPDK1 ) core 2: 640.00 IO/s 156.25 secs/100000 ios 00:16:03.171 SPDK bdev Controller (SPDK1 ) core 3: 810.67 IO/s 123.36 secs/100000 ios 00:16:03.171 ======================================================== 00:16:03.171 00:16:03.171 00:35:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:03.430 [2024-07-12 00:35:08.235004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.430 Initializing NVMe Controllers 00:16:03.430 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.430 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.430 Namespace ID: 1 size: 0GB 00:16:03.430 Initialization complete. 00:16:03.430 INFO: using host memory buffer for IO 00:16:03.430 Hello world! 00:16:03.430 [2024-07-12 00:35:08.268040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.688 00:35:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:03.947 [2024-07-12 00:35:08.744939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.884 Initializing NVMe Controllers 00:16:04.884 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.884 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.884 Initialization complete. Launching workers. 
00:16:04.884 submit (in ns) avg, min, max = 9289.0, 3844.5, 7057007.3 00:16:04.884 complete (in ns) avg, min, max = 32198.6, 2252.7, 7079140.0 00:16:04.884 00:16:04.884 Submit histogram 00:16:04.884 ================ 00:16:04.884 Range in us Cumulative Count 00:16:04.884 3.840 - 3.869: 0.3060% ( 34) 00:16:04.884 3.869 - 3.898: 2.9523% ( 294) 00:16:04.884 3.898 - 3.927: 8.4968% ( 616) 00:16:04.884 3.927 - 3.956: 12.5653% ( 452) 00:16:04.884 3.956 - 3.985: 17.4257% ( 540) 00:16:04.884 3.985 - 4.015: 23.9424% ( 724) 00:16:04.884 4.015 - 4.044: 30.8551% ( 768) 00:16:04.884 4.044 - 4.073: 35.7876% ( 548) 00:16:04.884 4.073 - 4.102: 40.4770% ( 521) 00:16:04.884 4.102 - 4.131: 45.4275% ( 550) 00:16:04.884 4.131 - 4.160: 49.4059% ( 442) 00:16:04.884 4.160 - 4.189: 53.2403% ( 426) 00:16:04.884 4.189 - 4.218: 56.7507% ( 390) 00:16:04.884 4.218 - 4.247: 61.1611% ( 490) 00:16:04.884 4.247 - 4.276: 65.8236% ( 518) 00:16:04.884 4.276 - 4.305: 69.9550% ( 459) 00:16:04.884 4.305 - 4.335: 73.9244% ( 441) 00:16:04.884 4.335 - 4.364: 77.5338% ( 401) 00:16:04.884 4.364 - 4.393: 80.4140% ( 320) 00:16:04.884 4.393 - 4.422: 82.8443% ( 270) 00:16:04.884 4.422 - 4.451: 84.3204% ( 164) 00:16:04.884 4.451 - 4.480: 86.0756% ( 195) 00:16:04.884 4.480 - 4.509: 87.1917% ( 124) 00:16:04.884 4.509 - 4.538: 88.3078% ( 124) 00:16:04.884 4.538 - 4.567: 89.0999% ( 88) 00:16:04.884 4.567 - 4.596: 89.8470% ( 83) 00:16:04.884 4.596 - 4.625: 90.7111% ( 96) 00:16:04.884 4.625 - 4.655: 91.2421% ( 59) 00:16:04.884 4.655 - 4.684: 91.7642% ( 58) 00:16:04.884 4.684 - 4.713: 92.3222% ( 62) 00:16:04.884 4.713 - 4.742: 92.6913% ( 41) 00:16:04.884 4.742 - 4.771: 92.9433% ( 28) 00:16:04.884 4.771 - 4.800: 93.1593% ( 24) 00:16:04.884 4.800 - 4.829: 93.3213% ( 18) 00:16:04.884 4.829 - 4.858: 93.4023% ( 9) 00:16:04.884 4.858 - 4.887: 93.5104% ( 12) 00:16:04.884 4.887 - 4.916: 93.6274% ( 13) 00:16:04.884 4.916 - 4.945: 93.7534% ( 14) 00:16:04.884 4.945 - 4.975: 93.8704% ( 13) 00:16:04.884 4.975 - 5.004: 93.9514% ( 9) 00:16:04.884 5.004 - 5.033: 94.0594% ( 12) 00:16:04.884 5.033 - 5.062: 94.1044% ( 5) 00:16:04.884 5.062 - 5.091: 94.1494% ( 5) 00:16:04.885 5.091 - 5.120: 94.1674% ( 2) 00:16:04.885 5.120 - 5.149: 94.2394% ( 8) 00:16:04.885 5.149 - 5.178: 94.2574% ( 2) 00:16:04.885 5.178 - 5.207: 94.2934% ( 4) 00:16:04.885 5.207 - 5.236: 94.3384% ( 5) 00:16:04.885 5.236 - 5.265: 94.4014% ( 7) 00:16:04.885 5.265 - 5.295: 94.4284% ( 3) 00:16:04.885 5.295 - 5.324: 94.4734% ( 5) 00:16:04.885 5.324 - 5.353: 94.5275% ( 6) 00:16:04.885 5.353 - 5.382: 94.5365% ( 1) 00:16:04.885 5.382 - 5.411: 94.5635% ( 3) 00:16:04.885 5.411 - 5.440: 94.6085% ( 5) 00:16:04.885 5.440 - 5.469: 94.6355% ( 3) 00:16:04.885 5.469 - 5.498: 94.6715% ( 4) 00:16:04.885 5.498 - 5.527: 94.6985% ( 3) 00:16:04.885 5.527 - 5.556: 94.7255% ( 3) 00:16:04.885 5.556 - 5.585: 94.7525% ( 3) 00:16:04.885 5.585 - 5.615: 94.7885% ( 4) 00:16:04.885 5.615 - 5.644: 94.8155% ( 3) 00:16:04.885 5.644 - 5.673: 94.8875% ( 8) 00:16:04.885 5.673 - 5.702: 94.9415% ( 6) 00:16:04.885 5.702 - 5.731: 94.9595% ( 2) 00:16:04.885 5.731 - 5.760: 95.0135% ( 6) 00:16:04.885 5.760 - 5.789: 95.0765% ( 7) 00:16:04.885 5.789 - 5.818: 95.1485% ( 8) 00:16:04.885 5.818 - 5.847: 95.2025% ( 6) 00:16:04.885 5.847 - 5.876: 95.2745% ( 8) 00:16:04.885 5.876 - 5.905: 95.2925% ( 2) 00:16:04.885 5.905 - 5.935: 95.3285% ( 4) 00:16:04.885 5.935 - 5.964: 95.3735% ( 5) 00:16:04.885 5.964 - 5.993: 95.4005% ( 3) 00:16:04.885 5.993 - 6.022: 95.4815% ( 9) 00:16:04.885 6.022 - 6.051: 95.5086% ( 3) 00:16:04.885 6.051 - 6.080: 95.5266% ( 2) 
00:16:04.885 6.080 - 6.109: 95.5716% ( 5) 00:16:04.885 6.109 - 6.138: 95.6796% ( 12) 00:16:04.885 6.138 - 6.167: 95.7336% ( 6) 00:16:04.885 6.167 - 6.196: 95.8146% ( 9) 00:16:04.885 6.196 - 6.225: 95.8686% ( 6) 00:16:04.885 6.225 - 6.255: 95.9136% ( 5) 00:16:04.885 6.255 - 6.284: 95.9766% ( 7) 00:16:04.885 6.284 - 6.313: 96.0036% ( 3) 00:16:04.885 6.313 - 6.342: 96.0756% ( 8) 00:16:04.885 6.342 - 6.371: 96.1116% ( 4) 00:16:04.885 6.371 - 6.400: 96.1206% ( 1) 00:16:04.885 6.400 - 6.429: 96.1656% ( 5) 00:16:04.885 6.429 - 6.458: 96.2106% ( 5) 00:16:04.885 6.458 - 6.487: 96.2376% ( 3) 00:16:04.885 6.487 - 6.516: 96.2646% ( 3) 00:16:04.885 6.516 - 6.545: 96.2916% ( 3) 00:16:04.885 6.545 - 6.575: 96.3366% ( 5) 00:16:04.885 6.575 - 6.604: 96.3726% ( 4) 00:16:04.885 6.604 - 6.633: 96.3906% ( 2) 00:16:04.885 6.633 - 6.662: 96.4086% ( 2) 00:16:04.885 6.662 - 6.691: 96.4266% ( 2) 00:16:04.885 6.691 - 6.720: 96.4356% ( 1) 00:16:04.885 6.720 - 6.749: 96.4446% ( 1) 00:16:04.885 6.749 - 6.778: 96.4986% ( 6) 00:16:04.885 6.778 - 6.807: 96.5437% ( 5) 00:16:04.885 6.807 - 6.836: 96.5707% ( 3) 00:16:04.885 6.865 - 6.895: 96.5887% ( 2) 00:16:04.885 6.895 - 6.924: 96.6427% ( 6) 00:16:04.885 6.924 - 6.953: 96.6877% ( 5) 00:16:04.885 7.011 - 7.040: 96.7147% ( 3) 00:16:04.885 7.069 - 7.098: 96.7237% ( 1) 00:16:04.885 7.098 - 7.127: 96.7777% ( 6) 00:16:04.885 7.156 - 7.185: 96.8227% ( 5) 00:16:04.885 7.185 - 7.215: 96.8497% ( 3) 00:16:04.885 7.215 - 7.244: 96.8767% ( 3) 00:16:04.885 7.244 - 7.273: 96.8947% ( 2) 00:16:04.885 7.273 - 7.302: 96.9217% ( 3) 00:16:04.885 7.302 - 7.331: 96.9307% ( 1) 00:16:04.885 7.331 - 7.360: 96.9487% ( 2) 00:16:04.885 7.360 - 7.389: 96.9667% ( 2) 00:16:04.885 7.389 - 7.418: 96.9937% ( 3) 00:16:04.885 7.418 - 7.447: 97.0117% ( 2) 00:16:04.885 7.447 - 7.505: 97.0927% ( 9) 00:16:04.885 7.505 - 7.564: 97.1107% ( 2) 00:16:04.885 7.564 - 7.622: 97.1827% ( 8) 00:16:04.885 7.622 - 7.680: 97.2277% ( 5) 00:16:04.885 7.680 - 7.738: 97.2547% ( 3) 00:16:04.885 7.738 - 7.796: 97.2997% ( 5) 00:16:04.885 7.796 - 7.855: 97.3087% ( 1) 00:16:04.885 7.855 - 7.913: 97.3627% ( 6) 00:16:04.885 7.913 - 7.971: 97.3807% ( 2) 00:16:04.885 7.971 - 8.029: 97.3987% ( 2) 00:16:04.885 8.029 - 8.087: 97.4167% ( 2) 00:16:04.885 8.145 - 8.204: 97.4527% ( 4) 00:16:04.885 8.204 - 8.262: 97.4617% ( 1) 00:16:04.885 8.262 - 8.320: 97.4977% ( 4) 00:16:04.885 8.320 - 8.378: 97.5158% ( 2) 00:16:04.885 8.378 - 8.436: 97.5338% ( 2) 00:16:04.885 8.436 - 8.495: 97.5518% ( 2) 00:16:04.885 8.495 - 8.553: 97.5698% ( 2) 00:16:04.885 8.553 - 8.611: 97.5788% ( 1) 00:16:04.885 8.611 - 8.669: 97.6058% ( 3) 00:16:04.885 8.669 - 8.727: 97.6238% ( 2) 00:16:04.885 8.727 - 8.785: 97.6508% ( 3) 00:16:04.885 8.785 - 8.844: 97.6778% ( 3) 00:16:04.885 8.844 - 8.902: 97.6868% ( 1) 00:16:04.885 8.902 - 8.960: 97.7228% ( 4) 00:16:04.885 8.960 - 9.018: 97.7678% ( 5) 00:16:04.885 9.018 - 9.076: 97.8128% ( 5) 00:16:04.885 9.076 - 9.135: 97.8398% ( 3) 00:16:04.885 9.135 - 9.193: 97.8578% ( 2) 00:16:04.885 9.193 - 9.251: 97.8848% ( 3) 00:16:04.885 9.251 - 9.309: 97.9028% ( 2) 00:16:04.885 9.309 - 9.367: 97.9118% ( 1) 00:16:04.885 9.367 - 9.425: 97.9568% ( 5) 00:16:04.885 9.425 - 9.484: 97.9748% ( 2) 00:16:04.885 9.484 - 9.542: 97.9928% ( 2) 00:16:04.885 9.542 - 9.600: 98.0108% ( 2) 00:16:04.885 9.600 - 9.658: 98.0198% ( 1) 00:16:04.885 9.658 - 9.716: 98.0288% ( 1) 00:16:04.885 9.716 - 9.775: 98.0378% ( 1) 00:16:04.885 9.775 - 9.833: 98.0558% ( 2) 00:16:04.885 9.833 - 9.891: 98.0738% ( 2) 00:16:04.885 9.949 - 10.007: 98.0918% ( 2) 00:16:04.885 10.007 - 
10.065: 98.1278% ( 4) 00:16:04.885 10.065 - 10.124: 98.1548% ( 3) 00:16:04.885 10.124 - 10.182: 98.1728% ( 2) 00:16:04.885 10.182 - 10.240: 98.1998% ( 3) 00:16:04.885 10.240 - 10.298: 98.2088% ( 1) 00:16:04.885 10.298 - 10.356: 98.2268% ( 2) 00:16:04.885 10.356 - 10.415: 98.2358% ( 1) 00:16:04.885 10.415 - 10.473: 98.2628% ( 3) 00:16:04.885 10.473 - 10.531: 98.2898% ( 3) 00:16:04.885 10.589 - 10.647: 98.3168% ( 3) 00:16:04.885 10.705 - 10.764: 98.3258% ( 1) 00:16:04.885 10.822 - 10.880: 98.3438% ( 2) 00:16:04.885 10.938 - 10.996: 98.3618% ( 2) 00:16:04.885 10.996 - 11.055: 98.3708% ( 1) 00:16:04.885 11.055 - 11.113: 98.3888% ( 2) 00:16:04.885 11.113 - 11.171: 98.4158% ( 3) 00:16:04.885 11.229 - 11.287: 98.4248% ( 1) 00:16:04.885 11.404 - 11.462: 98.4518% ( 3) 00:16:04.885 11.462 - 11.520: 98.4608% ( 1) 00:16:04.885 11.578 - 11.636: 98.4698% ( 1) 00:16:04.885 11.636 - 11.695: 98.4788% ( 1) 00:16:04.885 11.811 - 11.869: 98.4878% ( 1) 00:16:04.885 11.869 - 11.927: 98.5059% ( 2) 00:16:04.885 11.927 - 11.985: 98.5149% ( 1) 00:16:04.885 11.985 - 12.044: 98.5239% ( 1) 00:16:04.885 12.044 - 12.102: 98.5419% ( 2) 00:16:04.885 12.160 - 12.218: 98.5689% ( 3) 00:16:04.885 12.218 - 12.276: 98.5959% ( 3) 00:16:04.885 12.276 - 12.335: 98.6049% ( 1) 00:16:04.885 12.393 - 12.451: 98.6139% ( 1) 00:16:04.885 12.451 - 12.509: 98.6319% ( 2) 00:16:04.885 12.509 - 12.567: 98.6409% ( 1) 00:16:04.885 12.567 - 12.625: 98.6589% ( 2) 00:16:04.885 12.625 - 12.684: 98.6679% ( 1) 00:16:04.885 12.684 - 12.742: 98.6769% ( 1) 00:16:04.885 12.916 - 12.975: 98.6859% ( 1) 00:16:04.885 12.975 - 13.033: 98.6949% ( 1) 00:16:04.885 13.091 - 13.149: 98.7129% ( 2) 00:16:04.885 13.149 - 13.207: 98.7219% ( 1) 00:16:04.885 13.207 - 13.265: 98.7309% ( 1) 00:16:04.885 13.324 - 13.382: 98.7489% ( 2) 00:16:04.885 13.615 - 13.673: 98.7579% ( 1) 00:16:04.885 13.673 - 13.731: 98.7849% ( 3) 00:16:04.885 13.964 - 14.022: 98.8029% ( 2) 00:16:04.885 14.196 - 14.255: 98.8119% ( 1) 00:16:04.885 14.255 - 14.313: 98.8209% ( 1) 00:16:04.885 14.313 - 14.371: 98.8389% ( 2) 00:16:04.885 14.604 - 14.662: 98.8479% ( 1) 00:16:04.885 14.895 - 15.011: 98.8749% ( 3) 00:16:04.885 15.011 - 15.127: 98.8929% ( 2) 00:16:04.885 15.244 - 15.360: 98.9109% ( 2) 00:16:04.885 15.360 - 15.476: 98.9199% ( 1) 00:16:04.885 15.709 - 15.825: 98.9379% ( 2) 00:16:04.885 16.058 - 16.175: 98.9469% ( 1) 00:16:04.885 16.291 - 16.407: 98.9559% ( 1) 00:16:04.885 16.407 - 16.524: 98.9649% ( 1) 00:16:04.885 17.455 - 17.571: 98.9739% ( 1) 00:16:04.885 18.036 - 18.153: 98.9829% ( 1) 00:16:04.885 18.502 - 18.618: 99.0009% ( 2) 00:16:04.885 18.618 - 18.735: 99.0099% ( 1) 00:16:04.885 18.735 - 18.851: 99.0369% ( 3) 00:16:04.885 18.851 - 18.967: 99.0819% ( 5) 00:16:04.885 18.967 - 19.084: 99.1359% ( 6) 00:16:04.885 19.084 - 19.200: 99.1899% ( 6) 00:16:04.885 19.200 - 19.316: 99.2439% ( 6) 00:16:04.885 19.316 - 19.433: 99.2889% ( 5) 00:16:04.885 19.433 - 19.549: 99.3159% ( 3) 00:16:04.885 19.549 - 19.665: 99.3699% ( 6) 00:16:04.885 19.665 - 19.782: 99.3789% ( 1) 00:16:04.885 19.782 - 19.898: 99.3969% ( 2) 00:16:04.885 19.898 - 20.015: 99.4239% ( 3) 00:16:04.885 20.015 - 20.131: 99.4419% ( 2) 00:16:04.885 20.131 - 20.247: 99.4599% ( 2) 00:16:04.885 20.247 - 20.364: 99.5140% ( 6) 00:16:04.885 20.364 - 20.480: 99.5410% ( 3) 00:16:04.885 20.480 - 20.596: 99.5680% ( 3) 00:16:04.885 20.596 - 20.713: 99.6040% ( 4) 00:16:04.886 20.713 - 20.829: 99.6670% ( 7) 00:16:04.886 20.829 - 20.945: 99.6850% ( 2) 00:16:04.886 21.178 - 21.295: 99.7030% ( 2) 00:16:04.886 21.295 - 21.411: 99.7210% ( 2) 00:16:04.886 
21.411 - 21.527: 99.7390% ( 2) 00:16:04.886 21.993 - 22.109: 99.7480% ( 1) 00:16:04.886 22.458 - 22.575: 99.7570% ( 1) 00:16:04.886 22.691 - 22.807: 99.7660% ( 1) 00:16:04.886 23.040 - 23.156: 99.7750% ( 1) 00:16:04.886 23.156 - 23.273: 99.7840% ( 1) 00:16:04.886 23.273 - 23.389: 99.7930% ( 1) 00:16:04.886 24.320 - 24.436: 99.8020% ( 1) 00:16:04.886 24.553 - 24.669: 99.8110% ( 1) 00:16:04.886 26.531 - 26.647: 99.8200% ( 1) 00:16:04.886 26.647 - 26.764: 99.8290% ( 1) 00:16:04.886 27.113 - 27.229: 99.8380% ( 1) 00:16:04.886 27.462 - 27.578: 99.8470% ( 1) 00:16:04.886 27.811 - 27.927: 99.8560% ( 1) 00:16:04.886 28.276 - 28.393: 99.8650% ( 1) 00:16:04.886 28.858 - 28.975: 99.8740% ( 1) 00:16:04.886 29.324 - 29.440: 99.8830% ( 1) 00:16:04.886 3038.487 - 3053.382: 99.8920% ( 1) 00:16:04.886 3053.382 - 3068.276: 99.9010% ( 1) 00:16:04.886 3961.949 - 3991.738: 99.9280% ( 3) 00:16:04.886 3991.738 - 4021.527: 99.9820% ( 6) 00:16:04.886 4021.527 - 4051.316: 99.9910% ( 1) 00:16:04.886 7030.225 - 7060.015: 100.0000% ( 1) 00:16:04.886 00:16:04.886 Complete histogram 00:16:04.886 ================== 00:16:04.886 Range in us Cumulative Count 00:16:04.886 2.240 - 2.255: 0.0090% ( 1) 00:16:04.886 2.255 - 2.269: 0.2430% ( 26) 00:16:04.886 2.269 - 2.284: 5.7876% ( 616) 00:16:04.886 2.284 - 2.298: 19.7390% ( 1550) 00:16:04.886 2.298 - 2.313: 28.7129% ( 997) 00:16:04.886 2.313 - 2.327: 32.2682% ( 395) 00:16:04.886 2.327 - 2.342: 32.8893% ( 69) 00:16:04.886 2.342 - 2.356: 35.2025% ( 257) 00:16:04.886 2.356 - 2.371: 44.7255% ( 1058) 00:16:04.886 2.371 - 2.385: 52.3042% ( 842) 00:16:04.886 2.385 - 2.400: 54.6985% ( 266) 00:16:04.886 2.400 - 2.415: 55.6616% ( 107) 00:16:04.886 2.415 - 2.429: 56.6697% ( 112) 00:16:04.886 2.429 - 2.444: 60.9541% ( 476) 00:16:04.886 2.444 - 2.458: 66.6967% ( 638) 00:16:04.886 2.458 - 2.473: 70.0450% ( 372) 00:16:04.886 2.473 - 2.487: 71.4761% ( 159) 00:16:04.886 2.487 - 2.502: 72.1692% ( 77) 00:16:04.886 2.502 - 2.516: 72.9703% ( 89) 00:16:04.886 2.516 - 2.531: 77.3717% ( 489) 00:16:04.886 2.531 - 2.545: 84.4734% ( 789) 00:16:04.886 2.545 - 2.560: 88.2448% ( 419) 00:16:04.886 2.560 - 2.575: 89.7570% ( 168) 00:16:04.886 2.575 - 2.589: 90.5311% ( 86) 00:16:04.886 2.589 - 2.604: 91.1701% ( 71) 00:16:04.886 2.604 - 2.618: 91.7732% ( 67) 00:16:04.886 2.618 - 2.633: 92.7993% ( 114) 00:16:04.886 2.633 - 2.647: 93.7984% ( 111) 00:16:04.886 2.647 - 2.662: 94.4104% ( 68) 00:16:04.886 2.662 - 2.676: 94.7615% ( 39) 00:16:04.886 2.676 - 2.691: 94.9955% ( 26) 00:16:04.886 2.691 - 2.705: 95.2565% ( 29) 00:16:04.886 2.705 - 2.720: 95.3825% ( 14) 00:16:04.886 2.720 - 2.735: 95.5086% ( 14) 00:16:04.886 2.735 - 2.749: 95.7066% ( 22) 00:16:04.886 2.749 - 2.764: 95.8416% ( 15) 00:16:04.886 2.764 - 2.778: 95.9226% ( 9) 00:16:04.886 2.778 - 2.793: 96.0126% ( 10) 00:16:04.886 2.793 - 2.807: 96.0936% ( 9) 00:16:04.886 2.807 - 2.822: 96.1476% ( 6) 00:16:04.886 2.822 - 2.836: 96.2196% ( 8) 00:16:04.886 2.836 - 2.851: 96.3006% ( 9) 00:16:04.886 2.851 - 2.865: 96.3726% ( 8) 00:16:04.886 2.865 - 2.880: 96.4086% ( 4) 00:16:04.886 2.880 - 2.895: 96.4806% ( 8) 00:16:04.886 2.895 - 2.909: 96.5707% ( 10) 00:16:04.886 2.909 - 2.924: 96.6067% ( 4) 00:16:04.886 2.924 - 2.938: 96.6247% ( 2) 00:16:04.886 2.938 - 2.953: 96.6607% ( 4) 00:16:04.886 2.953 - 2.967: 96.6787% ( 2) 00:16:04.886 2.967 - 2.982: 96.6877% ( 1) 00:16:04.886 2.982 - 2.996: 96.7417% ( 6) 00:16:04.886 2.996 - 3.011: 96.7777% ( 4) 00:16:04.886 3.011 - 3.025: 96.8137% ( 4) 00:16:04.886 3.025 - 3.040: 96.8677% ( 6) 00:16:04.886 3.040 - 3.055: 96.9037% ( 4) 
00:16:04.886 3.055 - 3.069: 96.9307% ( 3) 00:16:04.886 3.069 - 3.084: 96.9397% ( 1) 00:16:04.886 3.084 - 3.098: 96.9667% ( 3) 00:16:04.886 3.098 - 3.113: 96.9757% ( 1) 00:16:04.886 3.113 - 3.127: 97.0477% ( 8) 00:16:04.886 3.127 - 3.142: 97.0657% ( 2) 00:16:04.886 3.142 - 3.156: 97.1197% ( 6) 00:16:04.886 3.156 - 3.171: 97.1557% ( 4) 00:16:04.886 3.171 - 3.185: 97.1737% ( 2) 00:16:04.886 3.185 - 3.200: 97.1827% ( 1) 00:16:04.886 3.200 - 3.215: 97.2097% ( 3) 00:16:04.886 3.215 - 3.229: 97.2367% ( 3) 00:16:04.886 3.229 - 3.244: 97.2637% ( 3) 00:16:04.886 3.244 - 3.258: 97.2907% ( 3) 00:16:04.886 3.258 - 3.273: 97.3177% ( 3) 00:16:04.886 3.273 - 3.287: 97.3357% ( 2) 00:16:04.886 3.287 - 3.302: 97.3447% ( 1) 00:16:04.886 3.302 - 3.316: 97.3807% ( 4) 00:16:04.886 3.316 - 3.331: 97.3897% ( 1) 00:16:04.886 3.360 - 3.375: 97.3987% ( 1) 00:16:04.886 3.375 - 3.389: 97.4257% ( 3) 00:16:04.886 3.389 - 3.404: 97.4437% ( 2) 00:16:04.886 3.404 - 3.418: 97.4707% ( 3) 00:16:04.886 3.418 - 3.433: 97.4977% ( 3) 00:16:04.886 3.433 - 3.447: 97.5068% ( 1) 00:16:04.886 3.462 - 3.476: 97.5428% ( 4) 00:16:04.886 3.491 - 3.505: 97.5518% ( 1) 00:16:04.886 3.505 - 3.520: 97.5608% ( 1) 00:16:04.886 3.520 - 3.535: 97.5698% ( 1) 00:16:04.886 3.535 - 3.549: 97.5968% ( 3) 00:16:04.886 3.549 - 3.564: 97.6328% ( 4) 00:16:04.886 3.564 - 3.578: 97.6508% ( 2) 00:16:04.886 3.578 - 3.593: 97.6688% ( 2) 00:16:04.886 3.593 - 3.607: 97.6778% ( 1) 00:16:04.886 3.622 - 3.636: 97.7048% ( 3) 00:16:04.886 3.636 - 3.651: 97.7138% ( 1) 00:16:04.886 3.651 - 3.665: 97.7228% ( 1) 00:16:04.886 3.665 - 3.680: 97.7318% ( 1) 00:16:04.886 3.695 - 3.709: 97.7498% ( 2) 00:16:04.886 3.709 - 3.724: 97.7768% ( 3) 00:16:04.886 3.753 - 3.782: 97.7948% ( 2) 00:16:04.886 3.782 - 3.811: 97.8218% ( 3) 00:16:04.886 3.811 - 3.840: 97.8398% ( 2) 00:16:04.886 3.840 - 3.869: 97.8668% ( 3) 00:16:04.886 3.869 - 3.898: 97.9118% ( 5) 00:16:04.886 3.898 - 3.927: 97.9478% ( 4) 00:16:04.886 3.927 - 3.956: 97.9838% ( 4) 00:16:04.886 3.956 - 3.985: 97.9928% ( 1) 00:16:04.886 3.985 - 4.015: 98.0108% ( 2) 00:16:04.886 4.015 - 4.044: 98.0288% ( 2) 00:16:04.886 4.073 - 4.102: 98.0378% ( 1) 00:16:04.886 4.102 - 4.131: 98.0558% ( 2) 00:16:04.886 4.131 - 4.160: 98.0648% ( 1) 00:16:04.886 4.247 - 4.276: 98.0918% ( 3) 00:16:04.886 4.305 - 4.335: 98.1098% ( 2) 00:16:04.886 4.393 - 4.422: 98.1188% ( 1) 00:16:04.886 4.422 - 4.451: 98.1368% ( 2) 00:16:04.886 4.480 - 4.509: 98.1548% ( 2) 00:16:04.886 4.509 - 4.538: 98.1638% ( 1) 00:16:04.886 4.538 - 4.567: 98.1998% ( 4) 00:16:04.886 4.567 - 4.596: 98.2088% ( 1) 00:16:04.886 4.596 - 4.625: 98.2178% ( 1) 00:16:04.886 4.655 - 4.684: 98.2358% ( 2) 00:16:04.886 4.713 - 4.742: 98.2448% ( 1) 00:16:04.886 4.742 - 4.771: 98.2538% ( 1) 00:16:04.886 4.771 - 4.800: 98.2628% ( 1) 00:16:04.886 4.800 - 4.829: 98.2718% ( 1) 00:16:04.886 4.829 - 4.858: 98.2808% ( 1) 00:16:04.886 4.887 - 4.916: 98.2898% ( 1) 00:16:04.886 4.916 - 4.945: 98.3258% ( 4) 00:16:04.886 5.033 - 5.062: 98.3348% ( 1) 00:16:04.886 5.062 - 5.091: 98.3618% ( 3) 00:16:04.886 5.091 - 5.120: 98.3798% ( 2) 00:16:04.886 5.382 - 5.411: 98.3888% ( 1) 00:16:04.886 5.440 - 5.469: 98.3978% ( 1) 00:16:04.886 5.585 - 5.615: 98.4158% ( 2) 00:16:04.886 5.789 - 5.818: 98.4248% ( 1) 00:16:04.886 5.847 - 5.876: 98.4338% ( 1) 00:16:04.886 5.935 - 5.964: 98.4428% ( 1) 00:16:04.886 6.167 - 6.196: 98.4518% ( 1) 00:16:04.886 6.371 - 6.400: 98.4608% ( 1) 00:16:04.886 6.429 - 6.458: 98.4698% ( 1) 00:16:04.886 6.545 - 6.575: 98.4788% ( 1) 00:16:04.886 6.720 - 6.749: 98.4878% ( 1) 00:16:04.886 6.749 - 
6.778: 98.4968% ( 1) 00:16:04.886 6.953 - 6.982: 98.5059% ( 1) 00:16:04.886 6.982 - 7.011: 98.5149% ( 1) 00:16:04.886 7.040 - 7.069: 98.5239% ( 1) 00:16:04.886 7.069 - 7.098: 98.5329% ( 1) 00:16:04.886 7.971 - 8.029: 98.5419% ( 1) 00:16:04.886 8.145 - 8.204: 98.5599% ( 2) 00:16:04.886 8.204 - 8.262: 98.5689% ( 1) 00:16:04.886 8.262 - 8.320: 98.5779% ( 1) 00:16:04.886 8.378 - 8.436: 98.5869% ( 1) 00:16:04.886 8.436 - 8.495: 98.5959% ( 1) 00:16:04.886 8.611 - 8.669: 98.6049% ( 1) 00:16:04.886 8.669 - 8.727: 98.6229% ( 2) 00:16:04.886 9.076 - 9.135: 98.6319% ( 1) 00:16:04.886 9.193 - 9.251: 98.6409% ( 1) 00:16:04.886 9.251 - 9.309: 98.6499% ( 1) 00:16:04.886 9.309 - 9.367: 98.6589% ( 1) 00:16:04.886 9.484 - 9.542: 98.6679% ( 1) 00:16:04.886 10.065 - 10.124: 98.6769% ( 1) 00:16:04.886 10.240 - 10.298: 98.6859% ( 1) 00:16:04.886 10.589 - 10.647: 98.6949% ( 1) 00:16:05.145 [2024-07-12 00:35:09.768718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.145 10.647 - 10.705: 98.7129% ( 2) 00:16:05.145 10.822 - 10.880: 98.7219% ( 1) 00:16:05.145 10.996 - 11.055: 98.7309% ( 1) 00:16:05.145 11.171 - 11.229: 98.7399% ( 1) 00:16:05.145 11.404 - 11.462: 98.7489% ( 1) 00:16:05.145 12.044 - 12.102: 98.7579% ( 1) 00:16:05.145 12.102 - 12.160: 98.7669% ( 1) 00:16:05.145 12.800 - 12.858: 98.7849% ( 2) 00:16:05.145 13.207 - 13.265: 98.7939% ( 1) 00:16:05.145 13.498 - 13.556: 98.8029% ( 1) 00:16:05.145 13.556 - 13.615: 98.8119% ( 1) 00:16:05.145 13.789 - 13.847: 98.8209% ( 1) 00:16:05.145 13.964 - 14.022: 98.8389% ( 2) 00:16:05.145 14.022 - 14.080: 98.8479% ( 1) 00:16:05.145 14.487 - 14.545: 98.8569% ( 1) 00:16:05.145 15.593 - 15.709: 98.8659% ( 1) 00:16:05.145 16.756 - 16.873: 98.8749% ( 1) 00:16:05.145 16.873 - 16.989: 98.8839% ( 1) 00:16:05.145 16.989 - 17.105: 98.9019% ( 2) 00:16:05.145 17.105 - 17.222: 98.9199% ( 2) 00:16:05.145 17.222 - 17.338: 98.9469% ( 3) 00:16:05.145 17.455 - 17.571: 98.9739% ( 3) 00:16:05.145 17.571 - 17.687: 99.0009% ( 3) 00:16:05.145 17.687 - 17.804: 99.0279% ( 3) 00:16:05.145 17.804 - 17.920: 99.0549% ( 3) 00:16:05.145 17.920 - 18.036: 99.0909% ( 4) 00:16:05.145 18.153 - 18.269: 99.0999% ( 1) 00:16:05.145 18.269 - 18.385: 99.1089% ( 1) 00:16:05.145 18.385 - 18.502: 99.1179% ( 1) 00:16:05.145 18.502 - 18.618: 99.1269% ( 1) 00:16:05.145 18.618 - 18.735: 99.1449% ( 2) 00:16:05.145 18.735 - 18.851: 99.1719% ( 3) 00:16:05.145 18.851 - 18.967: 99.1899% ( 2) 00:16:05.145 18.967 - 19.084: 99.2079% ( 2) 00:16:05.145 19.200 - 19.316: 99.2259% ( 2) 00:16:05.145 19.433 - 19.549: 99.2349% ( 1) 00:16:05.145 22.691 - 22.807: 99.2439% ( 1) 00:16:05.145 23.156 - 23.273: 99.2529% ( 1) 00:16:05.145 23.971 - 24.087: 99.2619% ( 1) 00:16:05.145 24.902 - 25.018: 99.2709% ( 1) 00:16:05.145 39.098 - 39.331: 99.2799% ( 1) 00:16:05.145 3023.593 - 3038.487: 99.2889% ( 1) 00:16:05.145 3038.487 - 3053.382: 99.3699% ( 9) 00:16:05.145 3053.382 - 3068.276: 99.3879% ( 2) 00:16:05.145 3068.276 - 3083.171: 99.3969% ( 1) 00:16:05.145 3083.171 - 3098.065: 99.4059% ( 1) 00:16:05.145 3932.160 - 3961.949: 99.4419% ( 4) 00:16:05.145 3961.949 - 3991.738: 99.5590% ( 13) 00:16:05.145 3991.738 - 4021.527: 99.8020% ( 27) 00:16:05.145 4021.527 - 4051.316: 99.8920% ( 10) 00:16:05.145 4051.316 - 4081.105: 99.9280% ( 4) 00:16:05.145 6047.185 - 6076.975: 99.9370% ( 1) 00:16:05.145 6076.975 - 6106.764: 99.9460% ( 1) 00:16:05.145 7000.436 - 7030.225: 99.9640% ( 2) 00:16:05.145 7030.225 - 7060.015: 99.9820% ( 2) 00:16:05.145 7060.015 - 7089.804: 100.0000% ( 2) 
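Editor's note: the submit/complete tables above are the overhead tool's cumulative latency histograms; each row is a bucket (range in microseconds) with the running percentage of I/Os at or below the bucket's upper bound and the per-bucket count in parentheses. A minimal sketch, assuming the run above were captured to a hypothetical overhead.log, of pulling the approximate p99 submit latency out of such a capture:

  awk '/Submit histogram/   { h = 1 }   # start scanning at the submit table
       /Complete histogram/ { h = 0 }   # stop before the complete table
       h { for (i = 1; i < NF - 1; i++)
             if ($i == "-") {           # bucket rows look like: lo - hi: pct% ( n)
               hi = $(i + 1); sub(/:$/, "", hi)
               pct = $(i + 2); sub(/%/, "", pct)
               if (pct + 0 >= 99) { print "submit p99 <= " hi " us"; exit }
             } }' overhead.log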
00:16:05.145 00:35:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:05.145 00:35:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:05.145 00:35:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:05.145 00:35:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:05.145 00:35:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.404 [ 00:16:05.404 { 00:16:05.404 "allow_any_host": true, 00:16:05.404 "hosts": [], 00:16:05.404 "listen_addresses": [], 00:16:05.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.404 "subtype": "Discovery" 00:16:05.404 }, 00:16:05.404 { 00:16:05.404 "allow_any_host": true, 00:16:05.404 "hosts": [], 00:16:05.404 "listen_addresses": [ 00:16:05.404 { 00:16:05.404 "adrfam": "IPv4", 00:16:05.404 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.404 "trsvcid": "0", 00:16:05.404 "trtype": "VFIOUSER" 00:16:05.404 } 00:16:05.404 ], 00:16:05.404 "max_cntlid": 65519, 00:16:05.404 "max_namespaces": 32, 00:16:05.404 "min_cntlid": 1, 00:16:05.404 "model_number": "SPDK bdev Controller", 00:16:05.404 "namespaces": [ 00:16:05.404 { 00:16:05.404 "bdev_name": "Malloc1", 00:16:05.404 "name": "Malloc1", 00:16:05.404 "nguid": "75F35B9488CC448E9A52DD456E81B62E", 00:16:05.404 "nsid": 1, 00:16:05.404 "uuid": "75f35b94-88cc-448e-9a52-dd456e81b62e" 00:16:05.404 } 00:16:05.404 ], 00:16:05.404 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.404 "serial_number": "SPDK1", 00:16:05.404 "subtype": "NVMe" 00:16:05.404 }, 00:16:05.404 { 00:16:05.404 "allow_any_host": true, 00:16:05.404 "hosts": [], 00:16:05.404 "listen_addresses": [ 00:16:05.404 { 00:16:05.404 "adrfam": "IPv4", 00:16:05.404 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.404 "trsvcid": "0", 00:16:05.404 "trtype": "VFIOUSER" 00:16:05.404 } 00:16:05.404 ], 00:16:05.404 "max_cntlid": 65519, 00:16:05.404 "max_namespaces": 32, 00:16:05.404 "min_cntlid": 1, 00:16:05.404 "model_number": "SPDK bdev Controller", 00:16:05.404 "namespaces": [ 00:16:05.404 { 00:16:05.404 "bdev_name": "Malloc2", 00:16:05.404 "name": "Malloc2", 00:16:05.404 "nguid": "CC3EDBDACB26404D8441767B4EFBFAC2", 00:16:05.404 "nsid": 1, 00:16:05.404 "uuid": "cc3edbda-cb26-404d-8441-767b4efbfac2" 00:16:05.404 } 00:16:05.404 ], 00:16:05.404 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.404 "serial_number": "SPDK2", 00:16:05.404 "subtype": "NVMe" 00:16:05.404 } 00:16:05.404 ] 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=78826 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:16:05.404 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=3 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=4 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.663 [2024-07-12 00:35:10.513597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:05.663 00:35:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:06.229 Malloc3 00:16:06.229 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:06.488 [2024-07-12 00:35:11.265732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.488 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.488 Asynchronous Event Request test 00:16:06.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.488 Registering asynchronous event callbacks... 00:16:06.488 Starting namespace attribute notice tests for all controllers... 00:16:06.488 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:06.488 aer_cb - Changed Namespace 00:16:06.488 Cleaning up... 
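Editor's note: the numbered autotest_common.sh lines traced above are its waitforfile helper polling for /tmp/aer_touch_file, which the aer tool creates once its event callbacks are armed; only then does the test remove the file and proceed. A minimal sketch of that loop as reconstructed from the trace (the real helper in autotest_common.sh may differ in detail):

  waitforfile() {
      local file=$1
      local i=0
      # poll every 0.1 s, give up after 200 attempts (~20 s)
      while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
          i=$((i + 1))
          sleep 0.1
      done
      # non-zero status if the file never appeared
      [ -e "$file" ] || return 1
      return 0
  }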
00:16:06.747 [ 00:16:06.747 { 00:16:06.747 "allow_any_host": true, 00:16:06.747 "hosts": [], 00:16:06.747 "listen_addresses": [], 00:16:06.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.747 "subtype": "Discovery" 00:16:06.747 }, 00:16:06.747 { 00:16:06.747 "allow_any_host": true, 00:16:06.747 "hosts": [], 00:16:06.747 "listen_addresses": [ 00:16:06.747 { 00:16:06.747 "adrfam": "IPv4", 00:16:06.747 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.747 "trsvcid": "0", 00:16:06.747 "trtype": "VFIOUSER" 00:16:06.747 } 00:16:06.747 ], 00:16:06.747 "max_cntlid": 65519, 00:16:06.747 "max_namespaces": 32, 00:16:06.747 "min_cntlid": 1, 00:16:06.747 "model_number": "SPDK bdev Controller", 00:16:06.747 "namespaces": [ 00:16:06.747 { 00:16:06.747 "bdev_name": "Malloc1", 00:16:06.747 "name": "Malloc1", 00:16:06.747 "nguid": "75F35B9488CC448E9A52DD456E81B62E", 00:16:06.747 "nsid": 1, 00:16:06.747 "uuid": "75f35b94-88cc-448e-9a52-dd456e81b62e" 00:16:06.747 }, 00:16:06.747 { 00:16:06.747 "bdev_name": "Malloc3", 00:16:06.747 "name": "Malloc3", 00:16:06.747 "nguid": "AE7D105B7E8C4101BB8B5752F27227FC", 00:16:06.747 "nsid": 2, 00:16:06.747 "uuid": "ae7d105b-7e8c-4101-bb8b-5752f27227fc" 00:16:06.747 } 00:16:06.747 ], 00:16:06.747 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.747 "serial_number": "SPDK1", 00:16:06.747 "subtype": "NVMe" 00:16:06.747 }, 00:16:06.747 { 00:16:06.747 "allow_any_host": true, 00:16:06.747 "hosts": [], 00:16:06.747 "listen_addresses": [ 00:16:06.747 { 00:16:06.747 "adrfam": "IPv4", 00:16:06.747 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.747 "trsvcid": "0", 00:16:06.747 "trtype": "VFIOUSER" 00:16:06.747 } 00:16:06.747 ], 00:16:06.747 "max_cntlid": 65519, 00:16:06.747 "max_namespaces": 32, 00:16:06.747 "min_cntlid": 1, 00:16:06.747 "model_number": "SPDK bdev Controller", 00:16:06.747 "namespaces": [ 00:16:06.747 { 00:16:06.747 "bdev_name": "Malloc2", 00:16:06.747 "name": "Malloc2", 00:16:06.747 "nguid": "CC3EDBDACB26404D8441767B4EFBFAC2", 00:16:06.747 "nsid": 1, 00:16:06.747 "uuid": "cc3edbda-cb26-404d-8441-767b4efbfac2" 00:16:06.747 } 00:16:06.747 ], 00:16:06.747 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.747 "serial_number": "SPDK2", 00:16:06.747 "subtype": "NVMe" 00:16:06.747 } 00:16:06.747 ] 00:16:06.747 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 78826 00:16:06.747 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:06.747 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:06.747 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:06.747 00:35:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:06.747 [2024-07-12 00:35:11.617610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:06.747 [2024-07-12 00:35:11.617744] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78867 ] 00:16:07.007 [2024-07-12 00:35:11.784023] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:07.007 [2024-07-12 00:35:11.787096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.007 [2024-07-12 00:35:11.787160] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f45325b4000 00:16:07.007 [2024-07-12 00:35:11.788082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.789058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.790067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.791073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.792104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.793106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.794093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.795093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.007 [2024-07-12 00:35:11.796147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.007 [2024-07-12 00:35:11.796204] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f45325a9000 00:16:07.007 [2024-07-12 00:35:11.797754] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:07.007 [2024-07-12 00:35:11.816060] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:07.007 [2024-07-12 00:35:11.816156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:07.007 [2024-07-12 00:35:11.818305] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.007 [2024-07-12 00:35:11.818489] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:07.007 [2024-07-12 00:35:11.819226] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:07.007 
[2024-07-12 00:35:11.819276] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:07.007 [2024-07-12 00:35:11.819290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:07.007 [2024-07-12 00:35:11.820471] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:07.007 [2024-07-12 00:35:11.820518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:07.007 [2024-07-12 00:35:11.820541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:07.007 [2024-07-12 00:35:11.821434] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.007 [2024-07-12 00:35:11.821480] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:07.007 [2024-07-12 00:35:11.821502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.822462] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:07.007 [2024-07-12 00:35:11.822499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.824453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:07.007 [2024-07-12 00:35:11.824509] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:07.007 [2024-07-12 00:35:11.824527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.824547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.824661] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:07.007 [2024-07-12 00:35:11.824674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.824687] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:07.007 [2024-07-12 00:35:11.825472] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:07.007 [2024-07-12 00:35:11.826464] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:07.007 [2024-07-12 00:35:11.827470] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:07.007 [2024-07-12 00:35:11.828469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.007 [2024-07-12 00:35:11.828615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:07.007 [2024-07-12 00:35:11.829494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:07.007 [2024-07-12 00:35:11.829538] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:07.007 [2024-07-12 00:35:11.829553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:07.007 [2024-07-12 00:35:11.829586] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:07.007 [2024-07-12 00:35:11.829604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:07.007 [2024-07-12 00:35:11.829640] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.007 [2024-07-12 00:35:11.829652] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.007 [2024-07-12 00:35:11.829683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.840448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.840515] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:07.008 [2024-07-12 00:35:11.840531] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:07.008 [2024-07-12 00:35:11.840543] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:07.008 [2024-07-12 00:35:11.840552] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:07.008 [2024-07-12 00:35:11.840564] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:07.008 [2024-07-12 00:35:11.840573] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:07.008 [2024-07-12 00:35:11.840586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.840607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.840636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.848444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.848513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.008 [2024-07-12 00:35:11.848536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.008 [2024-07-12 00:35:11.848555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.008 [2024-07-12 00:35:11.848569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.008 [2024-07-12 00:35:11.848581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.848605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.848625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.856435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.856482] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:07.008 [2024-07-12 00:35:11.856497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.856516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.856528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.856551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.867441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.867607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.867678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.867706] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:07.008 [2024-07-12 00:35:11.867718] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:07.008 [2024-07-12 00:35:11.867744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:07.008 
[2024-07-12 00:35:11.874420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.874525] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:07.008 [2024-07-12 00:35:11.874550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.874582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.874604] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.008 [2024-07-12 00:35:11.874618] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.008 [2024-07-12 00:35:11.874633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.883474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.883624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.883661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.883698] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.008 [2024-07-12 00:35:11.883715] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.008 [2024-07-12 00:35:11.883736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.891429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.891502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891608] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:07.008 [2024-07-12 00:35:11.891621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:07.008 [2024-07-12 00:35:11.891641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:07.008 [2024-07-12 00:35:11.891704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.899420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.899479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.907418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.907475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.915453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.915520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.926481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.926549] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:07.008 [2024-07-12 00:35:11.926564] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:07.008 [2024-07-12 00:35:11.926575] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:07.008 [2024-07-12 00:35:11.926583] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:07.008 [2024-07-12 00:35:11.926601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:07.008 [2024-07-12 00:35:11.926619] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:07.008 [2024-07-12 00:35:11.926632] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:07.008 [2024-07-12 00:35:11.926644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.926668] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:07.008 [2024-07-12 00:35:11.926679] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.008 [2024-07-12 00:35:11.926695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.008 [2024-07-12 
00:35:11.926722] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:07.008 [2024-07-12 00:35:11.926737] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:07.008 [2024-07-12 00:35:11.926752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:07.008 [2024-07-12 00:35:11.934427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.934487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.934510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:07.008 [2024-07-12 00:35:11.934524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:07.008 ===================================================== 00:16:07.008 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.008 ===================================================== 00:16:07.008 Controller Capabilities/Features 00:16:07.008 ================================ 00:16:07.008 Vendor ID: 4e58 00:16:07.008 Subsystem Vendor ID: 4e58 00:16:07.008 Serial Number: SPDK2 00:16:07.008 Model Number: SPDK bdev Controller 00:16:07.008 Firmware Version: 24.09 00:16:07.008 Recommended Arb Burst: 6 00:16:07.008 IEEE OUI Identifier: 8d 6b 50 00:16:07.008 Multi-path I/O 00:16:07.008 May have multiple subsystem ports: Yes 00:16:07.008 May have multiple controllers: Yes 00:16:07.008 Associated with SR-IOV VF: No 00:16:07.008 Max Data Transfer Size: 131072 00:16:07.008 Max Number of Namespaces: 32 00:16:07.008 Max Number of I/O Queues: 127 00:16:07.009 NVMe Specification Version (VS): 1.3 00:16:07.009 NVMe Specification Version (Identify): 1.3 00:16:07.009 Maximum Queue Entries: 256 00:16:07.009 Contiguous Queues Required: Yes 00:16:07.009 Arbitration Mechanisms Supported 00:16:07.009 Weighted Round Robin: Not Supported 00:16:07.009 Vendor Specific: Not Supported 00:16:07.009 Reset Timeout: 15000 ms 00:16:07.009 Doorbell Stride: 4 bytes 00:16:07.009 NVM Subsystem Reset: Not Supported 00:16:07.009 Command Sets Supported 00:16:07.009 NVM Command Set: Supported 00:16:07.009 Boot Partition: Not Supported 00:16:07.009 Memory Page Size Minimum: 4096 bytes 00:16:07.009 Memory Page Size Maximum: 4096 bytes 00:16:07.009 Persistent Memory Region: Not Supported 00:16:07.009 Optional Asynchronous Events Supported 00:16:07.009 Namespace Attribute Notices: Supported 00:16:07.009 Firmware Activation Notices: Not Supported 00:16:07.009 ANA Change Notices: Not Supported 00:16:07.009 PLE Aggregate Log Change Notices: Not Supported 00:16:07.009 LBA Status Info Alert Notices: Not Supported 00:16:07.009 EGE Aggregate Log Change Notices: Not Supported 00:16:07.009 Normal NVM Subsystem Shutdown event: Not Supported 00:16:07.009 Zone Descriptor Change Notices: Not Supported 00:16:07.009 Discovery Log Change Notices: Not Supported 00:16:07.009 Controller Attributes 00:16:07.009 128-bit Host Identifier: Supported 00:16:07.009 Non-Operational Permissive Mode: Not Supported 00:16:07.009 NVM Sets: Not Supported 00:16:07.009 Read Recovery Levels: Not Supported 00:16:07.009 Endurance Groups: Not 
Supported 00:16:07.009 Predictable Latency Mode: Not Supported 00:16:07.009 Traffic Based Keep ALive: Not Supported 00:16:07.009 Namespace Granularity: Not Supported 00:16:07.009 SQ Associations: Not Supported 00:16:07.009 UUID List: Not Supported 00:16:07.009 Multi-Domain Subsystem: Not Supported 00:16:07.009 Fixed Capacity Management: Not Supported 00:16:07.009 Variable Capacity Management: Not Supported 00:16:07.009 Delete Endurance Group: Not Supported 00:16:07.009 Delete NVM Set: Not Supported 00:16:07.009 Extended LBA Formats Supported: Not Supported 00:16:07.009 Flexible Data Placement Supported: Not Supported 00:16:07.009 00:16:07.009 Controller Memory Buffer Support 00:16:07.009 ================================ 00:16:07.009 Supported: No 00:16:07.009 00:16:07.009 Persistent Memory Region Support 00:16:07.009 ================================ 00:16:07.009 Supported: No 00:16:07.009 00:16:07.009 Admin Command Set Attributes 00:16:07.009 ============================ 00:16:07.009 Security Send/Receive: Not Supported 00:16:07.009 Format NVM: Not Supported 00:16:07.009 Firmware Activate/Download: Not Supported 00:16:07.009 Namespace Management: Not Supported 00:16:07.009 Device Self-Test: Not Supported 00:16:07.009 Directives: Not Supported 00:16:07.009 NVMe-MI: Not Supported 00:16:07.009 Virtualization Management: Not Supported 00:16:07.009 Doorbell Buffer Config: Not Supported 00:16:07.009 Get LBA Status Capability: Not Supported 00:16:07.009 Command & Feature Lockdown Capability: Not Supported 00:16:07.009 Abort Command Limit: 4 00:16:07.009 Async Event Request Limit: 4 00:16:07.009 Number of Firmware Slots: N/A 00:16:07.009 Firmware Slot 1 Read-Only: N/A 00:16:07.009 Firmware Activation Without Reset: N/A 00:16:07.009 Multiple Update Detection Support: N/A 00:16:07.009 Firmware Update Granularity: No Information Provided 00:16:07.009 Per-Namespace SMART Log: No 00:16:07.009 Asymmetric Namespace Access Log Page: Not Supported 00:16:07.009 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:07.009 Command Effects Log Page: Supported 00:16:07.009 Get Log Page Extended Data: Supported 00:16:07.009 Telemetry Log Pages: Not Supported 00:16:07.009 Persistent Event Log Pages: Not Supported 00:16:07.009 Supported Log Pages Log Page: May Support 00:16:07.009 Commands Supported & Effects Log Page: Not Supported 00:16:07.009 Feature Identifiers & Effects Log Page:May Support 00:16:07.009 NVMe-MI Commands & Effects Log Page: May Support 00:16:07.009 Data Area 4 for Telemetry Log: Not Supported 00:16:07.009 Error Log Page Entries Supported: 128 00:16:07.009 Keep Alive: Supported 00:16:07.009 Keep Alive Granularity: 10000 ms 00:16:07.009 00:16:07.009 NVM Command Set Attributes 00:16:07.009 ========================== 00:16:07.009 Submission Queue Entry Size 00:16:07.009 Max: 64 00:16:07.009 Min: 64 00:16:07.009 Completion Queue Entry Size 00:16:07.009 Max: 16 00:16:07.009 Min: 16 00:16:07.009 Number of Namespaces: 32 00:16:07.009 Compare Command: Supported 00:16:07.009 Write Uncorrectable Command: Not Supported 00:16:07.009 Dataset Management Command: Supported 00:16:07.009 Write Zeroes Command: Supported 00:16:07.009 Set Features Save Field: Not Supported 00:16:07.009 Reservations: Not Supported 00:16:07.009 Timestamp: Not Supported 00:16:07.009 Copy: Supported 00:16:07.009 Volatile Write Cache: Present 00:16:07.009 Atomic Write Unit (Normal): 1 00:16:07.009 Atomic Write Unit (PFail): 1 00:16:07.009 Atomic Compare & Write Unit: 1 00:16:07.009 Fused Compare & Write: Supported 00:16:07.009 Scatter-Gather 
List 00:16:07.009 SGL Command Set: Supported (Dword aligned) 00:16:07.009 SGL Keyed: Not Supported 00:16:07.009 SGL Bit Bucket Descriptor: Not Supported 00:16:07.009 SGL Metadata Pointer: Not Supported 00:16:07.009 Oversized SGL: Not Supported 00:16:07.009 SGL Metadata Address: Not Supported 00:16:07.009 SGL Offset: Not Supported 00:16:07.009 Transport SGL Data Block: Not Supported 00:16:07.009 Replay Protected Memory Block: Not Supported 00:16:07.009 00:16:07.009 Firmware Slot Information 00:16:07.009 ========================= 00:16:07.009 Active slot: 1 00:16:07.009 Slot 1 Firmware Revision: 24.09 00:16:07.009 00:16:07.009 00:16:07.009 Commands Supported and Effects 00:16:07.009 ============================== 00:16:07.009 Admin Commands 00:16:07.009 -------------- 00:16:07.009 Get Log Page (02h): Supported 00:16:07.009 Identify (06h): Supported 00:16:07.009 Abort (08h): Supported 00:16:07.009 Set Features (09h): Supported 00:16:07.009 Get Features (0Ah): Supported 00:16:07.009 Asynchronous Event Request (0Ch): Supported 00:16:07.009 Keep Alive (18h): Supported 00:16:07.009 I/O Commands 00:16:07.009 ------------ 00:16:07.009 Flush (00h): Supported LBA-Change 00:16:07.009 Write (01h): Supported LBA-Change 00:16:07.009 Read (02h): Supported 00:16:07.009 Compare (05h): Supported 00:16:07.009 Write Zeroes (08h): Supported LBA-Change 00:16:07.009 Dataset Management (09h): Supported LBA-Change 00:16:07.009 Copy (19h): Supported LBA-Change 00:16:07.009 00:16:07.009 Error Log 00:16:07.009 ========= 00:16:07.009 00:16:07.009 Arbitration 00:16:07.009 =========== 00:16:07.009 Arbitration Burst: 1 00:16:07.009 00:16:07.009 Power Management 00:16:07.009 ================ 00:16:07.009 Number of Power States: 1 00:16:07.009 Current Power State: Power State #0 00:16:07.009 Power State #0: 00:16:07.009 Max Power: 0.00 W 00:16:07.009 Non-Operational State: Operational 00:16:07.009 Entry Latency: Not Reported 00:16:07.009 Exit Latency: Not Reported 00:16:07.009 Relative Read Throughput: 0 00:16:07.009 Relative Read Latency: 0 00:16:07.009 Relative Write Throughput: 0 00:16:07.009 Relative Write Latency: 0 00:16:07.009 Idle Power: Not Reported 00:16:07.009 Active Power: Not Reported 00:16:07.009 Non-Operational Permissive Mode: Not Supported 00:16:07.009 00:16:07.009 Health Information 00:16:07.009 ================== 00:16:07.009 Critical Warnings: 00:16:07.009 Available Spare Space: OK 00:16:07.009 Temperature: OK 00:16:07.009 Device Reliability: OK 00:16:07.009 Read Only: No 00:16:07.009 Volatile Memory Backup: OK 00:16:07.009 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:07.009 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:07.009 Available Spare: 0% 00:16:07.009 Available Spare Threshold: 0% 00:16:07.268 [2024-07-12 00:35:11.934723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:07.268 [2024-07-12 00:35:11.942416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:07.268 [2024-07-12 00:35:11.942551] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:07.268 [2024-07-12 00:35:11.942576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.268 [2024-07-12 00:35:11.942597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.268 
[2024-07-12 00:35:11.942609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.268 [2024-07-12 00:35:11.942623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.268 [2024-07-12 00:35:11.942764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:07.268 [2024-07-12 00:35:11.942800] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:07.268 [2024-07-12 00:35:11.943780] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.268 [2024-07-12 00:35:11.943914] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:07.268 [2024-07-12 00:35:11.943940] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:07.268 [2024-07-12 00:35:11.944754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:07.268 [2024-07-12 00:35:11.944810] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:07.268 [2024-07-12 00:35:11.945609] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:07.268 [2024-07-12 00:35:11.947038] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:07.268 Life Percentage Used: 0% 00:16:07.268 Data Units Read: 0 00:16:07.268 Data Units Written: 0 00:16:07.268 Host Read Commands: 0 00:16:07.268 Host Write Commands: 0 00:16:07.268 Controller Busy Time: 0 minutes 00:16:07.268 Power Cycles: 0 00:16:07.268 Power On Hours: 0 hours 00:16:07.268 Unsafe Shutdowns: 0 00:16:07.268 Unrecoverable Media Errors: 0 00:16:07.268 Lifetime Error Log Entries: 0 00:16:07.268 Warning Temperature Time: 0 minutes 00:16:07.268 Critical Temperature Time: 0 minutes 00:16:07.268 00:16:07.268 Number of Queues 00:16:07.268 ================ 00:16:07.268 Number of I/O Submission Queues: 127 00:16:07.268 Number of I/O Completion Queues: 127 00:16:07.268 00:16:07.268 Active Namespaces 00:16:07.268 ================= 00:16:07.268 Namespace ID:1 00:16:07.268 Error Recovery Timeout: Unlimited 00:16:07.268 Command Set Identifier: NVM (00h) 00:16:07.268 Deallocate: Supported 00:16:07.268 Deallocated/Unwritten Error: Not Supported 00:16:07.268 Deallocated Read Value: Unknown 00:16:07.268 Deallocate in Write Zeroes: Not Supported 00:16:07.268 Deallocated Guard Field: 0xFFFF 00:16:07.268 Flush: Supported 00:16:07.268 Reservation: Supported 00:16:07.268 Namespace Sharing Capabilities: Multiple Controllers 00:16:07.268 Size (in LBAs): 131072 (0GiB) 00:16:07.268 Capacity (in LBAs): 131072 (0GiB) 00:16:07.268 Utilization (in LBAs): 131072 (0GiB) 00:16:07.268 NGUID: CC3EDBDACB26404D8441767B4EFBFAC2 00:16:07.268 UUID: cc3edbda-cb26-404d-8441-767b4efbfac2 00:16:07.268 Thin Provisioning: Not Supported 00:16:07.268 Per-NS Atomic Units: Yes 00:16:07.268 Atomic Boundary Size (Normal): 0 00:16:07.268 Atomic Boundary Size (PFail): 0 00:16:07.268 Atomic Boundary 
Offset: 0 00:16:07.268 Maximum Single Source Range Length: 65535 00:16:07.268 Maximum Copy Length: 65535 00:16:07.268 Maximum Source Range Count: 1 00:16:07.268 NGUID/EUI64 Never Reused: No 00:16:07.268 Namespace Write Protected: No 00:16:07.268 Number of LBA Formats: 1 00:16:07.268 Current LBA Format: LBA Format #00 00:16:07.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:07.268 00:16:07.268 00:35:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:07.526 [2024-07-12 00:35:12.391850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.819 Initializing NVMe Controllers 00:16:12.819 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.819 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:12.819 Initialization complete. Launching workers. 00:16:12.819 ======================================================== 00:16:12.819 Latency(us) 00:16:12.819 Device Information : IOPS MiB/s Average min max 00:16:12.819 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25053.10 97.86 5110.66 1373.20 11951.88 00:16:12.819 ======================================================== 00:16:12.819 Total : 25053.10 97.86 5110.66 1373.20 11951.88 00:16:12.819 00:16:12.819 [2024-07-12 00:35:17.495917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.819 00:35:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:13.076 [2024-07-12 00:35:17.962234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.334 Initializing NVMe Controllers 00:16:18.334 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:18.334 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:18.334 Initialization complete. Launching workers. 
00:16:18.334 ======================================================== 00:16:18.334 Latency(us) 00:16:18.334 Device Information : IOPS MiB/s Average min max 00:16:18.334 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24833.00 97.00 5154.17 1381.30 10980.60 00:16:18.334 ======================================================== 00:16:18.334 Total : 24833.00 97.00 5154.17 1381.30 10980.60 00:16:18.334 00:16:18.334 [2024-07-12 00:35:22.980412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.334 00:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:18.609 [2024-07-12 00:35:23.394875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.922 [2024-07-12 00:35:28.551614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.922 Initializing NVMe Controllers 00:16:23.922 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.922 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:23.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:23.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:23.922 Initialization complete. Launching workers. 00:16:23.922 Starting thread on core 2 00:16:23.922 Starting thread on core 3 00:16:23.922 Starting thread on core 1 00:16:23.922 00:35:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:24.181 [2024-07-12 00:35:29.040822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.467 [2024-07-12 00:35:32.214581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.467 Initializing NVMe Controllers 00:16:27.467 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.467 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:27.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:27.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:27.467 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:27.467 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:16:27.467 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:27.467 Initialization complete. Launching workers. 
00:16:27.467 Starting thread on core 1 with urgent priority queue 00:16:27.467 Starting thread on core 2 with urgent priority queue 00:16:27.467 Starting thread on core 3 with urgent priority queue 00:16:27.467 Starting thread on core 0 with urgent priority queue 00:16:27.467 SPDK bdev Controller (SPDK2 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:16:27.467 SPDK bdev Controller (SPDK2 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:16:27.467 SPDK bdev Controller (SPDK2 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:16:27.467 SPDK bdev Controller (SPDK2 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:16:27.467 ======================================================== 00:16:27.467 00:16:27.467 00:35:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:28.034 [2024-07-12 00:35:32.696309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:28.034 Initializing NVMe Controllers 00:16:28.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.034 Namespace ID: 1 size: 0GB 00:16:28.034 Initialization complete. 00:16:28.034 INFO: using host memory buffer for IO 00:16:28.034 Hello world! 00:16:28.034 [2024-07-12 00:35:32.710278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:28.034 00:35:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:28.291 [2024-07-12 00:35:33.185327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:29.668 Initializing NVMe Controllers 00:16:29.668 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.668 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.668 Initialization complete. Launching workers. 
00:16:29.668 submit (in ns) avg, min, max = 5639.3, 3848.2, 7070947.3 00:16:29.668 complete (in ns) avg, min, max = 33494.8, 2259.1, 7048003.6 00:16:29.668 00:16:29.668 Submit histogram 00:16:29.668 ================ 00:16:29.668 Range in us Cumulative Count 00:16:29.668 3.840 - 3.869: 0.1502% ( 16) 00:16:29.668 3.869 - 3.898: 1.4079% ( 134) 00:16:29.668 3.898 - 3.927: 3.6231% ( 236) 00:16:29.668 3.927 - 3.956: 5.2656% ( 175) 00:16:29.668 3.956 - 3.985: 7.6028% ( 249) 00:16:29.668 3.985 - 4.015: 11.0757% ( 370) 00:16:29.668 4.015 - 4.044: 14.5016% ( 365) 00:16:29.668 4.044 - 4.073: 17.2611% ( 294) 00:16:29.668 4.073 - 4.102: 20.5932% ( 355) 00:16:29.668 4.102 - 4.131: 24.7700% ( 445) 00:16:29.668 4.131 - 4.160: 28.7967% ( 429) 00:16:29.668 4.160 - 4.189: 32.2039% ( 363) 00:16:29.668 4.189 - 4.218: 36.4370% ( 451) 00:16:29.668 4.218 - 4.247: 42.6788% ( 665) 00:16:29.668 4.247 - 4.276: 50.9198% ( 878) 00:16:29.668 4.276 - 4.305: 57.9782% ( 752) 00:16:29.668 4.305 - 4.335: 64.4453% ( 689) 00:16:29.668 4.335 - 4.364: 69.8329% ( 574) 00:16:29.668 4.364 - 4.393: 73.7751% ( 420) 00:16:29.668 4.393 - 4.422: 76.9195% ( 335) 00:16:29.668 4.422 - 4.451: 79.0783% ( 230) 00:16:29.668 4.451 - 4.480: 81.0963% ( 215) 00:16:29.668 4.480 - 4.509: 82.9829% ( 201) 00:16:29.668 4.509 - 4.538: 84.3627% ( 147) 00:16:29.668 4.538 - 4.567: 85.5641% ( 128) 00:16:29.668 4.567 - 4.596: 86.8312% ( 135) 00:16:29.668 4.596 - 4.625: 88.0608% ( 131) 00:16:29.668 4.625 - 4.655: 89.0933% ( 110) 00:16:29.668 4.655 - 4.684: 89.9099% ( 87) 00:16:29.668 4.684 - 4.713: 90.6045% ( 74) 00:16:29.668 4.713 - 4.742: 91.1676% ( 60) 00:16:29.668 4.742 - 4.771: 91.5806% ( 44) 00:16:29.668 4.771 - 4.800: 91.9936% ( 44) 00:16:29.668 4.800 - 4.829: 92.2940% ( 32) 00:16:29.668 4.829 - 4.858: 92.5849% ( 31) 00:16:29.668 4.858 - 4.887: 92.8665% ( 30) 00:16:29.668 4.887 - 4.916: 93.1763% ( 33) 00:16:29.668 4.916 - 4.945: 93.4297% ( 27) 00:16:29.668 4.945 - 4.975: 93.6268% ( 21) 00:16:29.668 4.975 - 5.004: 93.7864% ( 17) 00:16:29.668 5.004 - 5.033: 93.9553% ( 18) 00:16:29.668 5.033 - 5.062: 94.0773% ( 13) 00:16:29.668 5.062 - 5.091: 94.1524% ( 8) 00:16:29.668 5.091 - 5.120: 94.2463% ( 10) 00:16:29.668 5.120 - 5.149: 94.2932% ( 5) 00:16:29.668 5.149 - 5.178: 94.3214% ( 3) 00:16:29.668 5.178 - 5.207: 94.3777% ( 6) 00:16:29.668 5.207 - 5.236: 94.4340% ( 6) 00:16:29.668 5.236 - 5.265: 94.4903% ( 6) 00:16:29.668 5.265 - 5.295: 94.5373% ( 5) 00:16:29.668 5.295 - 5.324: 94.5842% ( 5) 00:16:29.668 5.324 - 5.353: 94.6217% ( 4) 00:16:29.668 5.353 - 5.382: 94.6781% ( 6) 00:16:29.668 5.382 - 5.411: 94.6874% ( 1) 00:16:29.668 5.411 - 5.440: 94.6968% ( 1) 00:16:29.668 5.440 - 5.469: 94.7062% ( 1) 00:16:29.668 5.498 - 5.527: 94.7344% ( 3) 00:16:29.668 5.527 - 5.556: 94.7625% ( 3) 00:16:29.668 5.585 - 5.615: 94.7813% ( 2) 00:16:29.668 5.615 - 5.644: 94.8001% ( 2) 00:16:29.668 5.644 - 5.673: 94.8095% ( 1) 00:16:29.668 5.673 - 5.702: 94.8470% ( 4) 00:16:29.668 5.702 - 5.731: 94.8658% ( 2) 00:16:29.668 5.760 - 5.789: 94.9033% ( 4) 00:16:29.668 5.818 - 5.847: 94.9221% ( 2) 00:16:29.668 5.847 - 5.876: 94.9690% ( 5) 00:16:29.668 5.876 - 5.905: 95.0066% ( 4) 00:16:29.668 5.905 - 5.935: 95.0629% ( 6) 00:16:29.668 5.935 - 5.964: 95.1004% ( 4) 00:16:29.668 5.964 - 5.993: 95.1380% ( 4) 00:16:29.668 5.993 - 6.022: 95.1567% ( 2) 00:16:29.668 6.022 - 6.051: 95.2225% ( 7) 00:16:29.668 6.051 - 6.080: 95.2694% ( 5) 00:16:29.668 6.080 - 6.109: 95.3445% ( 8) 00:16:29.668 6.109 - 6.138: 95.3820% ( 4) 00:16:29.668 6.138 - 6.167: 95.3914% ( 1) 00:16:29.668 6.167 - 6.196: 95.4102% ( 
2) 00:16:29.668 6.196 - 6.225: 95.4477% ( 4) 00:16:29.668 6.225 - 6.255: 95.4853% ( 4) 00:16:29.668 6.255 - 6.284: 95.5228% ( 4) 00:16:29.668 6.284 - 6.313: 95.5510% ( 3) 00:16:29.668 6.313 - 6.342: 95.5885% ( 4) 00:16:29.668 6.342 - 6.371: 95.6354% ( 5) 00:16:29.668 6.371 - 6.400: 95.7011% ( 7) 00:16:29.668 6.400 - 6.429: 95.7575% ( 6) 00:16:29.668 6.429 - 6.458: 95.7668% ( 1) 00:16:29.668 6.458 - 6.487: 95.7950% ( 3) 00:16:29.668 6.487 - 6.516: 95.8419% ( 5) 00:16:29.668 6.516 - 6.545: 95.8607% ( 2) 00:16:29.668 6.545 - 6.575: 95.8701% ( 1) 00:16:29.668 6.575 - 6.604: 95.8795% ( 1) 00:16:29.668 6.604 - 6.633: 95.8983% ( 2) 00:16:29.668 6.633 - 6.662: 95.9264% ( 3) 00:16:29.668 6.662 - 6.691: 95.9452% ( 2) 00:16:29.668 6.691 - 6.720: 95.9640% ( 2) 00:16:29.668 6.720 - 6.749: 95.9733% ( 1) 00:16:29.668 6.749 - 6.778: 95.9921% ( 2) 00:16:29.668 6.778 - 6.807: 96.0203% ( 3) 00:16:29.668 6.807 - 6.836: 96.0390% ( 2) 00:16:29.668 6.836 - 6.865: 96.0672% ( 3) 00:16:29.668 6.865 - 6.895: 96.0766% ( 1) 00:16:29.668 6.895 - 6.924: 96.0860% ( 1) 00:16:29.668 6.924 - 6.953: 96.0954% ( 1) 00:16:29.668 6.953 - 6.982: 96.1141% ( 2) 00:16:29.668 6.982 - 7.011: 96.1329% ( 2) 00:16:29.668 7.011 - 7.040: 96.1517% ( 2) 00:16:29.668 7.040 - 7.069: 96.1705% ( 2) 00:16:29.668 7.069 - 7.098: 96.1892% ( 2) 00:16:29.668 7.098 - 7.127: 96.2080% ( 2) 00:16:29.668 7.127 - 7.156: 96.2268% ( 2) 00:16:29.668 7.156 - 7.185: 96.2362% ( 1) 00:16:29.668 7.185 - 7.215: 96.2549% ( 2) 00:16:29.668 7.215 - 7.244: 96.2737% ( 2) 00:16:29.668 7.244 - 7.273: 96.2925% ( 2) 00:16:29.668 7.273 - 7.302: 96.3206% ( 3) 00:16:29.668 7.302 - 7.331: 96.3394% ( 2) 00:16:29.668 7.331 - 7.360: 96.3582% ( 2) 00:16:29.668 7.360 - 7.389: 96.3863% ( 3) 00:16:29.668 7.389 - 7.418: 96.4051% ( 2) 00:16:29.668 7.418 - 7.447: 96.4145% ( 1) 00:16:29.668 7.447 - 7.505: 96.4614% ( 5) 00:16:29.668 7.505 - 7.564: 96.5084% ( 5) 00:16:29.668 7.564 - 7.622: 96.5459% ( 4) 00:16:29.668 7.622 - 7.680: 96.5928% ( 5) 00:16:29.668 7.680 - 7.738: 96.6867% ( 10) 00:16:29.668 7.738 - 7.796: 96.8087% ( 13) 00:16:29.668 7.796 - 7.855: 96.8744% ( 7) 00:16:29.668 7.855 - 7.913: 96.9120% ( 4) 00:16:29.668 7.913 - 7.971: 96.9683% ( 6) 00:16:29.668 7.971 - 8.029: 96.9870% ( 2) 00:16:29.668 8.029 - 8.087: 97.0621% ( 8) 00:16:29.668 8.087 - 8.145: 97.1185% ( 6) 00:16:29.668 8.145 - 8.204: 97.1654% ( 5) 00:16:29.668 8.204 - 8.262: 97.1748% ( 1) 00:16:29.668 8.262 - 8.320: 97.2029% ( 3) 00:16:29.668 8.320 - 8.378: 97.2311% ( 3) 00:16:29.668 8.378 - 8.436: 97.2405% ( 1) 00:16:29.668 8.436 - 8.495: 97.2780% ( 4) 00:16:29.668 8.495 - 8.553: 97.2968% ( 2) 00:16:29.668 8.553 - 8.611: 97.3156% ( 2) 00:16:29.668 8.611 - 8.669: 97.3343% ( 2) 00:16:29.668 8.669 - 8.727: 97.3719% ( 4) 00:16:29.668 8.844 - 8.902: 97.4000% ( 3) 00:16:29.668 8.902 - 8.960: 97.4094% ( 1) 00:16:29.668 8.960 - 9.018: 97.4282% ( 2) 00:16:29.668 9.018 - 9.076: 97.4376% ( 1) 00:16:29.669 9.076 - 9.135: 97.4564% ( 2) 00:16:29.669 9.135 - 9.193: 97.4939% ( 4) 00:16:29.669 9.193 - 9.251: 97.5502% ( 6) 00:16:29.669 9.251 - 9.309: 97.5784% ( 3) 00:16:29.669 9.309 - 9.367: 97.6347% ( 6) 00:16:29.669 9.367 - 9.425: 97.6441% ( 1) 00:16:29.669 9.425 - 9.484: 97.6816% ( 4) 00:16:29.669 9.542 - 9.600: 97.7473% ( 7) 00:16:29.669 9.658 - 9.716: 97.7755% ( 3) 00:16:29.669 9.716 - 9.775: 97.7943% ( 2) 00:16:29.669 9.775 - 9.833: 97.8318% ( 4) 00:16:29.669 9.833 - 9.891: 97.8506% ( 2) 00:16:29.669 9.891 - 9.949: 97.8693% ( 2) 00:16:29.669 9.949 - 10.007: 97.8881% ( 2) 00:16:29.669 10.007 - 10.065: 97.9163% ( 3) 00:16:29.669 
10.065 - 10.124: 97.9257% ( 1) 00:16:29.669 10.124 - 10.182: 97.9350% ( 1) 00:16:29.669 10.182 - 10.240: 97.9726% ( 4) 00:16:29.669 10.240 - 10.298: 97.9914% ( 2) 00:16:29.669 10.298 - 10.356: 98.0008% ( 1) 00:16:29.669 10.356 - 10.415: 98.0289% ( 3) 00:16:29.669 10.415 - 10.473: 98.0477% ( 2) 00:16:29.669 10.531 - 10.589: 98.0571% ( 1) 00:16:29.669 10.589 - 10.647: 98.0852% ( 3) 00:16:29.669 10.647 - 10.705: 98.1040% ( 2) 00:16:29.669 10.705 - 10.764: 98.1134% ( 1) 00:16:29.669 10.764 - 10.822: 98.1509% ( 4) 00:16:29.669 10.822 - 10.880: 98.1697% ( 2) 00:16:29.669 10.880 - 10.938: 98.1885% ( 2) 00:16:29.669 10.938 - 10.996: 98.2072% ( 2) 00:16:29.669 11.113 - 11.171: 98.2166% ( 1) 00:16:29.669 11.171 - 11.229: 98.2354% ( 2) 00:16:29.669 11.229 - 11.287: 98.2823% ( 5) 00:16:29.669 11.287 - 11.345: 98.3199% ( 4) 00:16:29.669 11.345 - 11.404: 98.3293% ( 1) 00:16:29.669 11.404 - 11.462: 98.3574% ( 3) 00:16:29.669 11.462 - 11.520: 98.4044% ( 5) 00:16:29.669 11.520 - 11.578: 98.4325% ( 3) 00:16:29.669 11.578 - 11.636: 98.4419% ( 1) 00:16:29.669 11.636 - 11.695: 98.4607% ( 2) 00:16:29.669 11.695 - 11.753: 98.4888% ( 3) 00:16:29.669 11.811 - 11.869: 98.4982% ( 1) 00:16:29.669 11.869 - 11.927: 98.5076% ( 1) 00:16:29.669 11.927 - 11.985: 98.5170% ( 1) 00:16:29.669 11.985 - 12.044: 98.5264% ( 1) 00:16:29.669 12.044 - 12.102: 98.5451% ( 2) 00:16:29.669 12.102 - 12.160: 98.5545% ( 1) 00:16:29.669 12.160 - 12.218: 98.5827% ( 3) 00:16:29.669 12.218 - 12.276: 98.6109% ( 3) 00:16:29.669 12.276 - 12.335: 98.6202% ( 1) 00:16:29.669 12.335 - 12.393: 98.6484% ( 3) 00:16:29.669 12.393 - 12.451: 98.6578% ( 1) 00:16:29.669 12.451 - 12.509: 98.6672% ( 1) 00:16:29.669 12.509 - 12.567: 98.6859% ( 2) 00:16:29.669 12.567 - 12.625: 98.7141% ( 3) 00:16:29.669 12.684 - 12.742: 98.7329% ( 2) 00:16:29.669 12.742 - 12.800: 98.7610% ( 3) 00:16:29.669 12.800 - 12.858: 98.7704% ( 1) 00:16:29.669 12.858 - 12.916: 98.7986% ( 3) 00:16:29.669 12.916 - 12.975: 98.8173% ( 2) 00:16:29.669 13.033 - 13.091: 98.8455% ( 3) 00:16:29.669 13.091 - 13.149: 98.8643% ( 2) 00:16:29.669 13.207 - 13.265: 98.8830% ( 2) 00:16:29.669 13.324 - 13.382: 98.8924% ( 1) 00:16:29.669 13.440 - 13.498: 98.9018% ( 1) 00:16:29.669 13.498 - 13.556: 98.9112% ( 1) 00:16:29.669 13.556 - 13.615: 98.9206% ( 1) 00:16:29.669 13.615 - 13.673: 98.9394% ( 2) 00:16:29.669 13.673 - 13.731: 98.9581% ( 2) 00:16:29.669 13.731 - 13.789: 98.9769% ( 2) 00:16:29.669 13.789 - 13.847: 98.9863% ( 1) 00:16:29.669 13.847 - 13.905: 98.9957% ( 1) 00:16:29.669 13.905 - 13.964: 99.0051% ( 1) 00:16:29.669 13.964 - 14.022: 99.0145% ( 1) 00:16:29.669 14.022 - 14.080: 99.0332% ( 2) 00:16:29.669 14.138 - 14.196: 99.0520% ( 2) 00:16:29.669 14.255 - 14.313: 99.0614% ( 1) 00:16:29.669 14.313 - 14.371: 99.0708% ( 1) 00:16:29.669 14.371 - 14.429: 99.0895% ( 2) 00:16:29.669 14.487 - 14.545: 99.0989% ( 1) 00:16:29.669 14.662 - 14.720: 99.1083% ( 1) 00:16:29.669 14.778 - 14.836: 99.1271% ( 2) 00:16:29.669 15.244 - 15.360: 99.1365% ( 1) 00:16:29.669 15.360 - 15.476: 99.1552% ( 2) 00:16:29.669 15.593 - 15.709: 99.1646% ( 1) 00:16:29.669 15.709 - 15.825: 99.1834% ( 2) 00:16:29.669 15.825 - 15.942: 99.1928% ( 1) 00:16:29.669 16.058 - 16.175: 99.2022% ( 1) 00:16:29.669 16.291 - 16.407: 99.2209% ( 2) 00:16:29.669 16.524 - 16.640: 99.2397% ( 2) 00:16:29.669 16.640 - 16.756: 99.2491% ( 1) 00:16:29.669 17.338 - 17.455: 99.2585% ( 1) 00:16:29.669 17.920 - 18.036: 99.2679% ( 1) 00:16:29.669 18.385 - 18.502: 99.2773% ( 1) 00:16:29.669 18.502 - 18.618: 99.3242% ( 5) 00:16:29.669 18.618 - 18.735: 99.3430% ( 2) 
00:16:29.669 18.735 - 18.851: 99.3899% ( 5) 00:16:29.669 18.851 - 18.967: 99.4650% ( 8) 00:16:29.669 18.967 - 19.084: 99.5213% ( 6) 00:16:29.669 19.084 - 19.200: 99.5682% ( 5) 00:16:29.669 19.200 - 19.316: 99.5870% ( 2) 00:16:29.669 19.316 - 19.433: 99.6058% ( 2) 00:16:29.669 19.433 - 19.549: 99.6809% ( 8) 00:16:29.669 19.549 - 19.665: 99.6903% ( 1) 00:16:29.669 19.665 - 19.782: 99.7090% ( 2) 00:16:29.669 19.782 - 19.898: 99.7372% ( 3) 00:16:29.669 19.898 - 20.015: 99.7466% ( 1) 00:16:29.669 20.015 - 20.131: 99.7653% ( 2) 00:16:29.669 20.131 - 20.247: 99.7935% ( 3) 00:16:29.669 20.247 - 20.364: 99.8217% ( 3) 00:16:29.669 20.364 - 20.480: 99.8310% ( 1) 00:16:29.669 20.480 - 20.596: 99.8592% ( 3) 00:16:29.669 20.596 - 20.713: 99.8874% ( 3) 00:16:29.669 20.713 - 20.829: 99.8968% ( 1) 00:16:29.669 21.178 - 21.295: 99.9155% ( 2) 00:16:29.669 21.295 - 21.411: 99.9249% ( 1) 00:16:29.669 21.411 - 21.527: 99.9343% ( 1) 00:16:29.669 22.109 - 22.225: 99.9437% ( 1) 00:16:29.669 23.505 - 23.622: 99.9531% ( 1) 00:16:29.669 26.065 - 26.182: 99.9625% ( 1) 00:16:29.669 32.116 - 32.349: 99.9718% ( 1) 00:16:29.669 43.055 - 43.287: 99.9812% ( 1) 00:16:29.669 3991.738 - 4021.527: 99.9906% ( 1) 00:16:29.669 7060.015 - 7089.804: 100.0000% ( 1) 00:16:29.669 00:16:29.669 Complete histogram 00:16:29.669 ================== 00:16:29.669 Range in us Cumulative Count 00:16:29.669 2.255 - 2.269: 0.0845% ( 9) 00:16:29.669 2.269 - 2.284: 1.2577% ( 125) 00:16:29.669 2.284 - 2.298: 5.3313% ( 434) 00:16:29.669 2.298 - 2.313: 7.7811% ( 261) 00:16:29.669 2.313 - 2.327: 8.7103% ( 99) 00:16:29.669 2.327 - 2.342: 8.9450% ( 25) 00:16:29.669 2.342 - 2.356: 10.8128% ( 199) 00:16:29.669 2.356 - 2.371: 18.7441% ( 845) 00:16:29.669 2.371 - 2.385: 24.9108% ( 657) 00:16:29.669 2.385 - 2.400: 26.6191% ( 182) 00:16:29.669 2.400 - 2.415: 27.1823% ( 60) 00:16:29.669 2.415 - 2.429: 27.9801% ( 85) 00:16:29.669 2.429 - 2.444: 33.2645% ( 563) 00:16:29.669 2.444 - 2.458: 41.2521% ( 851) 00:16:29.669 2.458 - 2.473: 44.8939% ( 388) 00:16:29.669 2.473 - 2.487: 46.2080% ( 140) 00:16:29.669 2.487 - 2.502: 46.7524% ( 58) 00:16:29.669 2.502 - 2.516: 48.8173% ( 220) 00:16:29.669 2.516 - 2.531: 61.3666% ( 1337) 00:16:29.669 2.531 - 2.545: 76.4783% ( 1610) 00:16:29.669 2.545 - 2.560: 82.8703% ( 681) 00:16:29.669 2.560 - 2.575: 85.1136% ( 239) 00:16:29.669 2.575 - 2.589: 86.3432% ( 131) 00:16:29.669 2.589 - 2.604: 87.2161% ( 93) 00:16:29.669 2.604 - 2.618: 88.1359% ( 98) 00:16:29.669 2.618 - 2.633: 89.9381% ( 192) 00:16:29.669 2.633 - 2.647: 91.8059% ( 199) 00:16:29.669 2.647 - 2.662: 92.8008% ( 106) 00:16:29.669 2.662 - 2.676: 93.3828% ( 62) 00:16:29.669 2.676 - 2.691: 93.8615% ( 51) 00:16:29.669 2.691 - 2.705: 94.2181% ( 38) 00:16:29.669 2.705 - 2.720: 94.4903% ( 29) 00:16:29.669 2.720 - 2.735: 94.8095% ( 34) 00:16:29.669 2.735 - 2.749: 94.9878% ( 19) 00:16:29.669 2.749 - 2.764: 95.2131% ( 24) 00:16:29.669 2.764 - 2.778: 95.2788% ( 7) 00:16:29.669 2.778 - 2.793: 95.3539% ( 8) 00:16:29.669 2.793 - 2.807: 95.4196% ( 7) 00:16:29.669 2.807 - 2.822: 95.5134% ( 10) 00:16:29.669 2.822 - 2.836: 95.5510% ( 4) 00:16:29.669 2.836 - 2.851: 95.6354% ( 9) 00:16:29.669 2.851 - 2.865: 95.6824% ( 5) 00:16:29.669 2.865 - 2.880: 95.7011% ( 2) 00:16:29.669 2.880 - 2.895: 95.7668% ( 7) 00:16:29.669 2.895 - 2.909: 95.7950% ( 3) 00:16:29.669 2.909 - 2.924: 95.8419% ( 5) 00:16:29.669 2.924 - 2.938: 96.0297% ( 20) 00:16:29.669 2.938 - 2.953: 96.2549% ( 24) 00:16:29.669 2.953 - 2.967: 96.6304% ( 40) 00:16:29.669 2.967 - 2.982: 96.9401% ( 33) 00:16:29.669 2.982 - 2.996: 97.1560% ( 
23) 00:16:29.669 2.996 - 3.011: 97.2592% ( 11) 00:16:29.669 3.011 - 3.025: 97.3343% ( 8) 00:16:29.669 3.025 - 3.040: 97.4000% ( 7) 00:16:29.669 3.040 - 3.055: 97.4376% ( 4) 00:16:29.669 3.055 - 3.069: 97.5033% ( 7) 00:16:29.669 3.069 - 3.084: 97.5596% ( 6) 00:16:29.669 3.084 - 3.098: 97.5690% ( 1) 00:16:29.669 3.098 - 3.113: 97.5878% ( 2) 00:16:29.669 3.113 - 3.127: 97.5971% ( 1) 00:16:29.669 3.127 - 3.142: 97.6253% ( 3) 00:16:29.669 3.142 - 3.156: 97.6441% ( 2) 00:16:29.669 3.156 - 3.171: 97.6535% ( 1) 00:16:29.669 3.171 - 3.185: 97.7098% ( 6) 00:16:29.669 3.200 - 3.215: 97.7286% ( 2) 00:16:29.669 3.215 - 3.229: 97.7379% ( 1) 00:16:29.669 3.244 - 3.258: 97.7473% ( 1) 00:16:29.669 3.258 - 3.273: 97.7567% ( 1) 00:16:29.669 3.287 - 3.302: 97.7661% ( 1) 00:16:29.669 3.302 - 3.316: 97.7755% ( 1) 00:16:29.669 3.316 - 3.331: 97.8036% ( 3) 00:16:29.669 3.360 - 3.375: 97.8130% ( 1) 00:16:29.669 3.418 - 3.433: 97.8224% ( 1) 00:16:29.669 3.462 - 3.476: 97.8506% ( 3) 00:16:29.669 3.476 - 3.491: 97.8600% ( 1) 00:16:29.669 3.578 - 3.593: 97.8693% ( 1) 00:16:29.669 3.593 - 3.607: 97.8787% ( 1) 00:16:29.669 3.607 - 3.622: 97.8881% ( 1) 00:16:29.669 3.680 - 3.695: 97.9069% ( 2) 00:16:29.669 3.753 - 3.782: 97.9257% ( 2) 00:16:29.669 3.782 - 3.811: 97.9350% ( 1) 00:16:29.669 3.811 - 3.840: 97.9444% ( 1) 00:16:29.669 3.869 - 3.898: 97.9538% ( 1) 00:16:29.669 3.956 - 3.985: 97.9726% ( 2) 00:16:29.669 3.985 - 4.015: 97.9914% ( 2) 00:16:29.669 4.015 - 4.044: 98.0008% ( 1) 00:16:29.669 4.073 - 4.102: 98.0101% ( 1) 00:16:29.669 4.160 - 4.189: 98.0195% ( 1) 00:16:29.669 4.305 - 4.335: 98.0383% ( 2) 00:16:29.669 4.393 - 4.422: 98.0477% ( 1) 00:16:29.669 4.422 - 4.451: 98.0758% ( 3) 00:16:29.669 4.451 - 4.480: 98.0852% ( 1) 00:16:29.669 4.480 - 4.509: 98.1040% ( 2) 00:16:29.669 4.509 - 4.538: 98.1134% ( 1) 00:16:29.669 4.596 - 4.625: 98.1322% ( 2) 00:16:29.669 4.625 - 4.655: 98.1415% ( 1) 00:16:29.669 4.655 - 4.684: 98.1509% ( 1) 00:16:29.670 4.684 - 4.713: 98.1603% ( 1) 00:16:29.670 4.713 - 4.742: 98.1791% ( 2) 00:16:29.670 4.742 - 4.771: 98.1979% ( 2) 00:16:29.670 4.829 - 4.858: 98.2072% ( 1) 00:16:29.670 4.916 - 4.945: 98.2260% ( 2) 00:16:29.670 4.975 - 5.004: 98.2354% ( 1) 00:16:29.670 5.033 - 5.062: 98.2448% ( 1) 00:16:29.670 5.062 - 5.091: 98.2542% ( 1) 00:16:29.670 5.120 - 5.149: 98.2636% ( 1) 00:16:29.670 5.265 - 5.295: 98.2917% ( 3) 00:16:29.670 5.295 - 5.324: 98.3011% ( 1) 00:16:29.670 5.324 - 5.353: 98.3105% ( 1) 00:16:29.670 5.498 - 5.527: 98.3199% ( 1) 00:16:29.670 5.760 - 5.789: 98.3293% ( 1) 00:16:29.670 5.876 - 5.905: 98.3387% ( 1) 00:16:29.670 6.080 - 6.109: 98.3480% ( 1) 00:16:29.670 6.138 - 6.167: 98.3574% ( 1) 00:16:29.670 6.196 - 6.225: 98.3668% ( 1) 00:16:29.670 6.342 - 6.371: 98.3762% ( 1) 00:16:29.670 6.545 - 6.575: 98.3856% ( 1) 00:16:29.670 6.575 - 6.604: 98.3950% ( 1) 00:16:29.670 6.604 - 6.633: 98.4044% ( 1) 00:16:29.670 6.778 - 6.807: 98.4137% ( 1) 00:16:29.670 6.865 - 6.895: 98.4231% ( 1) 00:16:29.670 6.982 - 7.011: 98.4325% ( 1) 00:16:29.670 7.127 - 7.156: 98.4419% ( 1) 00:16:29.670 8.262 - 8.320: 98.4513% ( 1) 00:16:29.670 8.320 - 8.378: 98.4607% ( 1) 00:16:29.670 8.495 - 8.553: 98.4701% ( 1) 00:16:29.670 8.844 - 8.902: 98.4794% ( 1) 00:16:29.670 8.902 - 8.960: 98.4888% ( 1) 00:16:29.670 8.960 - 9.018: 98.4982% ( 1) 00:16:29.670 9.367 - 9.425: 98.5076% ( 1) 00:16:29.670 9.484 - 9.542: 98.5170% ( 1) 00:16:29.670 9.600 - 9.658: 98.5264% ( 1) 00:16:29.670 9.658 - 9.716: 98.5451% ( 2) 00:16:29.670 9.891 - 9.949: 98.5545% ( 1) 00:16:29.670 10.589 - 10.647: 98.5639% ( 1) 00:16:29.670 10.647 
- 10.705: 98.5733% ( 1) 00:16:29.670 10.822 - 10.880: 98.5827% ( 1) 00:16:29.670 11.055 - 11.113: 98.5921% ( 1) 00:16:29.670 11.171 - 11.229: 98.6015% ( 1) 00:16:29.670 11.578 - 11.636: 98.6109% ( 1) 00:16:29.670 11.636 - 11.695: 98.6202% ( 1) 00:16:29.670 11.695 - 11.753: 98.6296% ( 1) 00:16:29.670 12.335 - 12.393: 98.6390% ( 1) 00:16:29.670 12.393 - 12.451: 98.6484% ( 1) 00:16:29.670 12.451 - 12.509: 98.6578% ( 1) 00:16:29.670 12.625 - 12.684: 98.6672% ( 1) 00:16:29.670 12.975 - 13.033: 98.6766% ( 1) 00:16:29.670 13.091 - 13.149: 98.6859% ( 1) 00:16:29.670 13.673 - 13.731: 98.6953% ( 1) 00:16:29.670 14.138 - 14.196: 98.7047% ( 1) 00:16:29.670 14.487 - 14.545: 98.7141% ( 1) 00:16:29.670 16.407 - 16.524: 98.7235% ( 1) 00:16:29.670 16.640 - 16.756: 98.7423% ( 2) 00:16:29.670 16.756 - 16.873: 98.7610% ( 2) 00:16:29.670 16.989 - 17.105: 98.7798% ( 2) 00:16:29.670 17.105 - 17.222: 98.8267% ( 5) [2024-07-12 00:35:34.287730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.670 17.222 - 17.338: 98.8549% ( 3) 00:16:29.670 17.338 - 17.455: 98.8643% ( 1) 00:16:29.670 17.455 - 17.571: 98.9018% ( 4) 00:16:29.670 17.571 - 17.687: 98.9581% ( 6) 00:16:29.670 17.687 - 17.804: 98.9957% ( 4) 00:16:29.670 17.804 - 17.920: 99.0051% ( 1) 00:16:29.670 17.920 - 18.036: 99.0238% ( 2) 00:16:29.670 18.036 - 18.153: 99.0426% ( 2) 00:16:29.670 18.385 - 18.502: 99.0520% ( 1) 00:16:29.670 18.502 - 18.618: 99.0802% ( 3) 00:16:29.670 18.735 - 18.851: 99.0989% ( 2) 00:16:29.670 18.851 - 18.967: 99.1083% ( 1) 00:16:29.670 18.967 - 19.084: 99.1177% ( 1) 00:16:29.670 19.084 - 19.200: 99.1365% ( 2) 00:16:29.670 19.549 - 19.665: 99.1459% ( 1) 00:16:29.670 19.665 - 19.782: 99.1646% ( 2) 00:16:29.670 20.945 - 21.062: 99.1740% ( 1) 00:16:29.670 21.062 - 21.178: 99.1834% ( 1) 00:16:29.670 21.993 - 22.109: 99.1928% ( 1) 00:16:29.670 22.225 - 22.342: 99.2116% ( 2) 00:16:29.670 24.087 - 24.204: 99.2209% ( 1) 00:16:29.670 28.975 - 29.091: 99.2303% ( 1) 00:16:29.670 3038.487 - 3053.382: 99.2585% ( 3) 00:16:29.670 3053.382 - 3068.276: 99.2773% ( 2) 00:16:29.670 3902.371 - 3932.160: 99.2867% ( 1) 00:16:29.670 3932.160 - 3961.949: 99.3054% ( 2) 00:16:29.670 3961.949 - 3991.738: 99.4462% ( 15) 00:16:29.670 3991.738 - 4021.527: 99.8217% ( 40) 00:16:29.670 4021.527 - 4051.316: 99.9531% ( 14) 00:16:29.670 4051.316 - 4081.105: 99.9718% ( 2) 00:16:29.670 4081.105 - 4110.895: 99.9812% ( 1) 00:16:29.670 6047.185 - 6076.975: 99.9906% ( 1) 00:16:29.670 7030.225 - 7060.015: 100.0000% ( 1) 00:16:29.670 00:16:29.670 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.929 [ 00:16:29.929 { 00:16:29.929 "allow_any_host": true, 00:16:29.929 "hosts": [], 00:16:29.929 "listen_addresses": [], 00:16:29.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.929 "subtype": "Discovery" 00:16:29.929 }, 00:16:29.929 { 00:16:29.929 "allow_any_host": true, 00:16:29.929 "hosts": [], 
00:16:29.929 "listen_addresses": [ 00:16:29.929 { 00:16:29.929 "adrfam": "IPv4", 00:16:29.929 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.929 "trsvcid": "0", 00:16:29.929 "trtype": "VFIOUSER" 00:16:29.929 } 00:16:29.929 ], 00:16:29.929 "max_cntlid": 65519, 00:16:29.929 "max_namespaces": 32, 00:16:29.929 "min_cntlid": 1, 00:16:29.929 "model_number": "SPDK bdev Controller", 00:16:29.929 "namespaces": [ 00:16:29.929 { 00:16:29.929 "bdev_name": "Malloc1", 00:16:29.929 "name": "Malloc1", 00:16:29.929 "nguid": "75F35B9488CC448E9A52DD456E81B62E", 00:16:29.929 "nsid": 1, 00:16:29.929 "uuid": "75f35b94-88cc-448e-9a52-dd456e81b62e" 00:16:29.929 }, 00:16:29.929 { 00:16:29.929 "bdev_name": "Malloc3", 00:16:29.929 "name": "Malloc3", 00:16:29.929 "nguid": "AE7D105B7E8C4101BB8B5752F27227FC", 00:16:29.929 "nsid": 2, 00:16:29.929 "uuid": "ae7d105b-7e8c-4101-bb8b-5752f27227fc" 00:16:29.929 } 00:16:29.929 ], 00:16:29.929 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.929 "serial_number": "SPDK1", 00:16:29.929 "subtype": "NVMe" 00:16:29.929 }, 00:16:29.929 { 00:16:29.929 "allow_any_host": true, 00:16:29.929 "hosts": [], 00:16:29.929 "listen_addresses": [ 00:16:29.929 { 00:16:29.929 "adrfam": "IPv4", 00:16:29.929 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.929 "trsvcid": "0", 00:16:29.929 "trtype": "VFIOUSER" 00:16:29.929 } 00:16:29.929 ], 00:16:29.929 "max_cntlid": 65519, 00:16:29.929 "max_namespaces": 32, 00:16:29.929 "min_cntlid": 1, 00:16:29.929 "model_number": "SPDK bdev Controller", 00:16:29.929 "namespaces": [ 00:16:29.929 { 00:16:29.929 "bdev_name": "Malloc2", 00:16:29.929 "name": "Malloc2", 00:16:29.929 "nguid": "CC3EDBDACB26404D8441767B4EFBFAC2", 00:16:29.929 "nsid": 1, 00:16:29.929 "uuid": "cc3edbda-cb26-404d-8441-767b4efbfac2" 00:16:29.929 } 00:16:29.929 ], 00:16:29.929 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.929 "serial_number": "SPDK2", 00:16:29.929 "subtype": "NVMe" 00:16:29.929 } 00:16:29.929 ] 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=79136 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.929 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:29.930 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:16:29.930 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:30.188 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:30.188 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:16:30.188 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=3 00:16:30.188 00:35:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:30.188 [2024-07-12 00:35:34.980222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:30.188 00:35:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:30.188 00:35:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:30.188 00:35:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:30.188 00:35:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:30.188 00:35:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:30.755 Malloc4 00:16:30.755 00:35:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:31.013 [2024-07-12 00:35:35.927841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.013 00:35:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.272 Asynchronous Event Request test 00:16:31.272 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.272 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.272 Registering asynchronous event callbacks... 00:16:31.272 Starting namespace attribute notice tests for all controllers... 00:16:31.272 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:31.272 aer_cb - Changed Namespace 00:16:31.272 Cleaning up... 
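For reference, the AER exchange captured above reduces to a small handshake between the test harness and the aer example binary. A minimal sketch, reusing only commands, flags, and paths that appear in this log; the ordering is inferred from the trace and anything beyond it is an assumption:

    # Start the AER listener; per the trace it signals readiness through the file passed via -t.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!                                                # the harness stores this as $aerpid (79136 above)
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # same poll loop as the waitforfile trace above
    rm -f /tmp/aer_touch_file
    # Attach a second namespace to cnode2; this is what fires the Changed Namespace AEN seen above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait $aerpid   # aer exits once aer_cb has handled the namespace-attribute notice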
00:16:31.272 [ 00:16:31.272 { 00:16:31.272 "allow_any_host": true, 00:16:31.272 "hosts": [], 00:16:31.272 "listen_addresses": [], 00:16:31.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.272 "subtype": "Discovery" 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "allow_any_host": true, 00:16:31.272 "hosts": [], 00:16:31.272 "listen_addresses": [ 00:16:31.272 { 00:16:31.272 "adrfam": "IPv4", 00:16:31.272 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.272 "trsvcid": "0", 00:16:31.272 "trtype": "VFIOUSER" 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "max_cntlid": 65519, 00:16:31.272 "max_namespaces": 32, 00:16:31.272 "min_cntlid": 1, 00:16:31.272 "model_number": "SPDK bdev Controller", 00:16:31.272 "namespaces": [ 00:16:31.272 { 00:16:31.272 "bdev_name": "Malloc1", 00:16:31.272 "name": "Malloc1", 00:16:31.272 "nguid": "75F35B9488CC448E9A52DD456E81B62E", 00:16:31.272 "nsid": 1, 00:16:31.272 "uuid": "75f35b94-88cc-448e-9a52-dd456e81b62e" 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "bdev_name": "Malloc3", 00:16:31.272 "name": "Malloc3", 00:16:31.272 "nguid": "AE7D105B7E8C4101BB8B5752F27227FC", 00:16:31.272 "nsid": 2, 00:16:31.272 "uuid": "ae7d105b-7e8c-4101-bb8b-5752f27227fc" 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.272 "serial_number": "SPDK1", 00:16:31.272 "subtype": "NVMe" 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "allow_any_host": true, 00:16:31.272 "hosts": [], 00:16:31.272 "listen_addresses": [ 00:16:31.272 { 00:16:31.272 "adrfam": "IPv4", 00:16:31.272 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.272 "trsvcid": "0", 00:16:31.272 "trtype": "VFIOUSER" 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "max_cntlid": 65519, 00:16:31.272 "max_namespaces": 32, 00:16:31.272 "min_cntlid": 1, 00:16:31.272 "model_number": "SPDK bdev Controller", 00:16:31.272 "namespaces": [ 00:16:31.272 { 00:16:31.272 "bdev_name": "Malloc2", 00:16:31.272 "name": "Malloc2", 00:16:31.272 "nguid": "CC3EDBDACB26404D8441767B4EFBFAC2", 00:16:31.272 "nsid": 1, 00:16:31.272 "uuid": "cc3edbda-cb26-404d-8441-767b4efbfac2" 00:16:31.272 }, 00:16:31.272 { 00:16:31.272 "bdev_name": "Malloc4", 00:16:31.272 "name": "Malloc4", 00:16:31.272 "nguid": "6A3F7F5BDE1C445C9561899687DF4A3B", 00:16:31.272 "nsid": 2, 00:16:31.272 "uuid": "6a3f7f5b-de1c-445c-9561-899687df4a3b" 00:16:31.272 } 00:16:31.272 ], 00:16:31.272 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.272 "serial_number": "SPDK2", 00:16:31.272 "subtype": "NVMe" 00:16:31.272 } 00:16:31.272 ] 00:16:31.272 00:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 79136 00:16:31.272 00:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:31.272 00:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 78417 00:16:31.272 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 78417 ']' 00:16:31.272 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 78417 00:16:31.273 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:31.273 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.273 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78417 00:16:31.531 killing process with pid 78417 00:16:31.531 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.531 00:35:36 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.531 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78417' 00:16:31.531 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 78417 00:16:31.531 00:35:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 78417 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=79203 00:16:33.433 Process pid: 79203 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 79203' 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 79203 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 79203 ']' 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.433 00:35:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:33.433 [2024-07-12 00:35:38.348684] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:33.433 [2024-07-12 00:35:38.351914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:33.433 [2024-07-12 00:35:38.352082] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.692 [2024-07-12 00:35:38.530773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.951 [2024-07-12 00:35:38.773597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.951 [2024-07-12 00:35:38.773676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:33.951 [2024-07-12 00:35:38.773711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.951 [2024-07-12 00:35:38.773724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.951 [2024-07-12 00:35:38.773751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.951 [2024-07-12 00:35:38.774167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.951 [2024-07-12 00:35:38.774316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.951 [2024-07-12 00:35:38.774420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.951 [2024-07-12 00:35:38.774858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.210 [2024-07-12 00:35:39.107575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:34.210 [2024-07-12 00:35:39.110359] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:34.210 [2024-07-12 00:35:39.110914] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:34.210 [2024-07-12 00:35:39.111448] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:34.210 [2024-07-12 00:35:39.111967] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:34.468 00:35:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.468 00:35:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:34.468 00:35:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:35.402 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:35.659 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:35.659 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:35.659 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:35.659 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:35.659 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:36.226 Malloc1 00:16:36.226 00:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:36.226 00:35:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:36.484 00:35:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:37.050 00:35:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:37.050 00:35:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:16:37.050 00:35:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:37.308 Malloc2 00:16:37.308 00:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:37.565 00:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:37.823 00:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 79203 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 79203 ']' 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 79203 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79203 00:16:38.081 killing process with pid 79203 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79203' 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 79203 00:16:38.081 00:35:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 79203 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:39.977 ************************************ 00:16:39.977 END TEST nvmf_vfio_user 00:16:39.977 ************************************ 00:16:39.977 00:16:39.977 real 1m1.807s 00:16:39.977 user 3m55.901s 00:16:39.977 sys 0m5.988s 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:39.977 00:35:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.977 00:35:44 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:39.977 00:35:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.977 00:35:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.977 00:35:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.977 ************************************ 00:16:39.977 START TEST nvmf_vfio_user_nvme_compliance 00:16:39.977 ************************************ 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:39.977 * Looking for test storage... 00:16:39.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.977 00:35:44 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.977 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:39.978 00:35:44 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=79415 00:16:39.978 Process pid: 79415 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 79415' 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 79415 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 79415 ']' 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.978 00:35:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.978 [2024-07-12 00:35:44.682088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:39.978 [2024-07-12 00:35:44.682287] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.978 [2024-07-12 00:35:44.863652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.236 [2024-07-12 00:35:45.148631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.236 [2024-07-12 00:35:45.148731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.236 [2024-07-12 00:35:45.148747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.236 [2024-07-12 00:35:45.148761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.236 [2024-07-12 00:35:45.148771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
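The trace above (compliance.sh@19-26) is the standard autotest target bring-up: nvmf_tgt is launched in the background on cores 0-2 (mask 0x7) with all tracepoint groups enabled (0xFFFF), its pid is recorded, a trap guarantees the target is killed on any exit path, and waitforlisten blocks until the RPC socket answers. Condensed to a sketch of the same pattern, with the paths and values of this run:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!                               # 79415 in this run
  echo "Process pid: $nvmfpid"
  trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$nvmfpid"                 # block until /var/tmp/spdk.sock answers
  sleep 1                                  # compliance.sh@26

The DPDK EAL banner and the spdk_trace notices that follow are that target coming up.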
00:16:40.236 [2024-07-12 00:35:45.149940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.236 [2024-07-12 00:35:45.150088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.236 [2024-07-12 00:35:45.150091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.801 00:35:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.801 00:35:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:40.801 00:35:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.749 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.006 malloc0 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:42.006 00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.006 
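With all three reactors up, compliance.sh@28-38 provision the vfio-user target entirely over RPC: a VFIOUSER transport, a 64 MiB / 512 B-block malloc bdev, a subsystem with serial "spdk" and up to 32 namespaces, the bdev attached as a namespace, and a listener on the /var/run/vfio-user socket directory. The same sequence, sketched as direct scripts/rpc.py calls rather than the rpc_cmd wrapper used in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2021-09.io.spdk:cnode0
  traddr=/var/run/vfio-user

  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p "$traddr"
  $rpc bdev_malloc_create 64 512 -b malloc0           # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem "$nqn" -a -s spdk -m 32  # -a: allow any host
  $rpc nvmf_subsystem_add_ns "$nqn" malloc0
  $rpc nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

The nvme_compliance binary is then pointed at the matching transport ID string ('trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'), which is what drives the enabling/disabling-controller notices bracketing each CUnit test below.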
00:35:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:42.264 00:16:42.264 00:16:42.264 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.264 http://cunit.sourceforge.net/ 00:16:42.264 00:16:42.264 00:16:42.264 Suite: nvme_compliance 00:16:42.264 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 00:35:47.030177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.264 [2024-07-12 00:35:47.031958] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:42.264 [2024-07-12 00:35:47.032058] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:42.264 [2024-07-12 00:35:47.032082] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:42.264 [2024-07-12 00:35:47.033207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.264 passed 00:16:42.264 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 00:35:47.161351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.264 [2024-07-12 00:35:47.164439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.522 passed 00:16:42.522 Test: admin_identify_ns ...[2024-07-12 00:35:47.291689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.522 [2024-07-12 00:35:47.352441] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:42.522 [2024-07-12 00:35:47.360440] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:42.522 [2024-07-12 00:35:47.381691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.522 passed 00:16:42.779 Test: admin_get_features_mandatory_features ...[2024-07-12 00:35:47.506603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.779 [2024-07-12 00:35:47.509622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.779 passed 00:16:42.779 Test: admin_get_features_optional_features ...[2024-07-12 00:35:47.637611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.779 [2024-07-12 00:35:47.640643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.779 passed 00:16:43.036 Test: admin_set_features_number_of_queues ...[2024-07-12 00:35:47.765896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.036 [2024-07-12 00:35:47.869357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.036 passed 00:16:43.295 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 00:35:47.996161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.295 [2024-07-12 00:35:48.001218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.295 passed 00:16:43.295 Test: admin_get_log_page_with_lpo ...[2024-07-12 00:35:48.123719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.295 [2024-07-12 00:35:48.191441] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:43.295 
[2024-07-12 00:35:48.204525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.554 passed 00:16:43.554 Test: fabric_property_get ...[2024-07-12 00:35:48.320807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.554 [2024-07-12 00:35:48.322243] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:43.554 [2024-07-12 00:35:48.323834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.554 passed 00:16:43.554 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 00:35:48.446717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.554 [2024-07-12 00:35:48.448266] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:43.554 [2024-07-12 00:35:48.449747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.811 passed 00:16:43.811 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 00:35:48.571625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.811 [2024-07-12 00:35:48.653431] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:43.811 [2024-07-12 00:35:48.669448] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:43.811 [2024-07-12 00:35:48.675524] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.069 passed 00:16:44.069 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 00:35:48.802318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.069 [2024-07-12 00:35:48.803860] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:44.069 [2024-07-12 00:35:48.805339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.069 passed 00:16:44.069 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 00:35:48.976875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.335 [2024-07-12 00:35:49.054448] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:44.335 [2024-07-12 00:35:49.078422] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:44.335 [2024-07-12 00:35:49.084537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.335 passed 00:16:44.335 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 00:35:49.236232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.335 [2024-07-12 00:35:49.237977] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:44.335 [2024-07-12 00:35:49.238085] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:44.335 [2024-07-12 00:35:49.240274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.593 passed 00:16:44.593 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 00:35:49.378039] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.593 [2024-07-12 00:35:49.471504] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:44.593 [2024-07-12 00:35:49.479502] 
vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:44.593 [2024-07-12 00:35:49.487484] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:44.593 [2024-07-12 00:35:49.495464] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:44.593 [2024-07-12 00:35:49.523382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.851 passed 00:16:44.851 Test: admin_create_io_sq_verify_pc ...[2024-07-12 00:35:49.652274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.851 [2024-07-12 00:35:49.668457] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:44.851 [2024-07-12 00:35:49.692053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.851 passed 00:16:45.109 Test: admin_create_io_qp_max_qps ...[2024-07-12 00:35:49.820181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.064 [2024-07-12 00:35:50.941433] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:46.630 [2024-07-12 00:35:51.371306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.630 passed 00:16:46.630 Test: admin_create_io_sq_shared_cq ...[2024-07-12 00:35:51.500595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:46.888 [2024-07-12 00:35:51.633420] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:46.889 [2024-07-12 00:35:51.670634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:46.889 passed 00:16:46.889 00:16:46.889 Run Summary: Type Total Ran Passed Failed Inactive 00:16:46.889 suites 1 1 n/a 0 0 00:16:46.889 tests 18 18 18 0 0 00:16:46.889 asserts 360 360 360 0 n/a 00:16:46.889 00:16:46.889 Elapsed time = 2.022 seconds 00:16:46.889 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 79415 00:16:46.889 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 79415 ']' 00:16:46.889 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 79415 00:16:46.889 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79415 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79415' 00:16:47.147 killing process with pid 79415 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 79415 00:16:47.147 00:35:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 79415 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:48.526 
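After the 18-test CUnit run passes, killprocess (autotest_common.sh@948-972, traced above) tears the target down defensively: it checks the pid is still alive, inspects the process name so it knows what it is signalling (reactor_0 here), then kills and reaps it. A rough sketch of its shape, with the sudo special-casing visible at @958 elided:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2> /dev/null || return 1        # anything left to kill?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap it (@972)
  }

The real/user/sys timing summary then closes out the nvmf_vfio_user_nvme_compliance suite.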
00:35:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:48.526 00:16:48.526 real 0m8.797s 00:16:48.526 user 0m23.718s 00:16:48.526 sys 0m0.794s 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:48.526 ************************************ 00:16:48.526 END TEST nvmf_vfio_user_nvme_compliance 00:16:48.526 ************************************ 00:16:48.526 00:35:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:48.526 00:35:53 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:48.526 00:35:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:48.526 00:35:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.526 00:35:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:48.526 ************************************ 00:16:48.526 START TEST nvmf_vfio_user_fuzz 00:16:48.526 ************************************ 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:48.526 * Looking for test storage... 00:16:48.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # 
source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.526 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.527 00:35:53 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:48.527 Process pid: 79590 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=79590 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 79590' 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 79590 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 79590 ']' 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
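waitforlisten (autotest_common.sh@829-838, traced above with its defaults rpc_addr=/var/tmp/spdk.sock and max_retries=100) is the other half of the launch pattern: it polls until the freshly started nvmf_tgt answers on its RPC socket, bailing out if the process dies first. A minimal re-implementation under those assumptions — the polling mechanism shown here (round-tripping rpc_get_methods) is an assumption, the real helper does more:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                  rpc_get_methods &> /dev/null; then
              return 0                              # an RPC round-tripped: ready
          fi
          sleep 0.5
      done
      return 1
  }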
00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.527 00:35:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.900 00:35:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.900 00:35:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:49.900 00:35:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.834 malloc0 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:50.834 00:35:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:51.766 Shutting down the fuzz application 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:51.766 00:35:56 
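The fuzz pass itself is the single command at vfio_user_fuzz.sh@43 above, reusing the transport ID assembled at @41 verbatim. Reading the flags from the trace: -m 0x2 puts the fuzzer on core 1 while the target owns core 0 (it was started with -m 0x1 here), -t 30 runs for 30 seconds, and -S 123456 is read here as a fixed RNG seed (so a crash found in CI can be replayed); -N and -a are passed through as in the trace rather than glossed. As a sketch:

  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
      -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

"Shutting down the fuzz application" followed by the nvmf_delete_subsystem call above is the clean-exit path: the 30 seconds elapsed without hanging or crashing the target.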
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 79590 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 79590 ']' 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 79590 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79590 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79590' 00:16:51.766 killing process with pid 79590 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 79590 00:16:51.766 00:35:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 79590 00:16:53.143 00:35:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:53.143 00:35:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:53.143 00:16:53.143 real 0m4.669s 00:16:53.143 user 0m5.296s 00:16:53.143 sys 0m0.638s 00:16:53.143 00:35:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.143 00:35:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 ************************************ 00:16:53.143 END TEST nvmf_vfio_user_fuzz 00:16:53.143 ************************************ 00:16:53.143 00:35:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.143 00:35:58 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.143 00:35:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.143 00:35:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.143 00:35:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 ************************************ 00:16:53.143 START TEST nvmf_host_management 00:16:53.143 ************************************ 00:16:53.143 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.401 * Looking for test storage... 
00:16:53.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.401 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:53.402 Cannot find device "nvmf_tgt_br" 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.402 Cannot find device "nvmf_tgt_br2" 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:53.402 Cannot find device "nvmf_tgt_br" 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:53.402 Cannot find device "nvmf_tgt_br2" 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:53.402 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:53.660 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:53.660 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:53.660 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.660 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.661 00:35:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:53.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:53.661 00:16:53.661 --- 10.0.0.2 ping statistics --- 00:16:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.661 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:53.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:53.661 00:16:53.661 --- 10.0.0.3 ping statistics --- 00:16:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.661 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:16:53.661 00:16:53.661 --- 10.0.0.1 ping statistics --- 00:16:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.661 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=79841 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 79841 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 79841 ']' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.661 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.661 00:35:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:53.919 [2024-07-12 00:35:58.606655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:53.919 [2024-07-12 00:35:58.606853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.919 [2024-07-12 00:35:58.774742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.178 [2024-07-12 00:35:59.032338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.178 [2024-07-12 00:35:59.032432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.178 [2024-07-12 00:35:59.032467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.178 [2024-07-12 00:35:59.032483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.178 [2024-07-12 00:35:59.032496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
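Before this target was launched, nvmf_veth_init (nvmf/common.sh@141-207, traced above) built the whole test network in software: a namespace for the target, three veth pairs, a bridge tying the host-side ends together, iptables accepts, and three pings proving 10.0.0.1/2/3 are mutually reachable. Condensed to its essential commands (straight from the trace, with some link-up steps folded into loops):

  ip netns add nvmf_tgt_ns_spdk

  # three veth pairs; the target-side ends move into the namespace (@169-175)
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator .1 outside, target .2/.3 inside (@178-180)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring links up on both sides (@183-189)
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  for l in nvmf_tgt_if nvmf_tgt_if2 lo; do
      ip netns exec nvmf_tgt_ns_spdk ip link set "$l" up
  done

  # bridge the host-side ends and open the NVMe/TCP port (@192-202)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The sub-0.1 ms ping RTTs above are expected: all three "hosts" are veth endpoints on one bridge. Because @209 prefixes NVMF_APP with NVMF_TARGET_NS_CMD, the nvmf_tgt started at @480 (pid 79841) runs inside nvmf_tgt_ns_spdk, which is why its listener on 10.0.0.2:4420 is reachable from the initiator side.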
00:16:54.178 [2024-07-12 00:35:59.032954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.178 [2024-07-12 00:35:59.033515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.178 [2024-07-12 00:35:59.033655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.178 [2024-07-12 00:35:59.033655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.745 [2024-07-12 00:35:59.631451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.745 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.003 Malloc0 00:16:55.003 [2024-07-12 00:35:59.757065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=79913 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 79913 /var/tmp/bdevperf.sock 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 79913 ']' 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.003 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:55.004 { 00:16:55.004 "params": { 00:16:55.004 "name": "Nvme$subsystem", 00:16:55.004 "trtype": "$TEST_TRANSPORT", 00:16:55.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.004 "adrfam": "ipv4", 00:16:55.004 "trsvcid": "$NVMF_PORT", 00:16:55.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.004 "hdgst": ${hdgst:-false}, 00:16:55.004 "ddgst": ${ddgst:-false} 00:16:55.004 }, 00:16:55.004 "method": "bdev_nvme_attach_controller" 00:16:55.004 } 00:16:55.004 EOF 00:16:55.004 )") 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:55.004 00:35:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:55.004 "params": { 00:16:55.004 "name": "Nvme0", 00:16:55.004 "trtype": "tcp", 00:16:55.004 "traddr": "10.0.0.2", 00:16:55.004 "adrfam": "ipv4", 00:16:55.004 "trsvcid": "4420", 00:16:55.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:55.004 "hdgst": false, 00:16:55.004 "ddgst": false 00:16:55.004 }, 00:16:55.004 "method": "bdev_nvme_attach_controller" 00:16:55.004 }' 00:16:55.004 [2024-07-12 00:35:59.925630] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:55.004 [2024-07-12 00:35:59.925842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79913 ] 00:16:55.263 [2024-07-12 00:36:00.105683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.525 [2024-07-12 00:36:00.376681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.091 Running I/O for 10 seconds... 
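gen_nvmf_target_json expands the heredoc traced above into the bdevperf configuration: a single bdev_nvme_attach_controller entry with this run's values substituted, fed to bdevperf over /dev/fd/63 via process substitution. An equivalent sketch using a temp file instead — note the outer envelope is assumed from SPDK's standard JSON config schema, since the exact wrapper built by the helper is not visible in this trace:

  # envelope assumed; the "params"/"method" entry matches the trace verbatim
  cat > /tmp/bdevperf.json << 'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
      -q 64 -o 65536 -w verify -t 10    # QD 64, 64 KiB verify I/O, 10 seconds

Once framework_wait_init confirms the bdev layer is up, the waitforio loop that follows polls bdev_get_iostat through jq -r '.bdevs[0].num_read_ops' until at least 100 reads have completed (195 on the first check here), and only then injects the fault via nvmf_subsystem_remove_host — which is what produces the ABORTED - SQ DELETION completions in the qpair trace below.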
00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:56.091 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.092 00:36:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.354 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.354 [2024-07-12 00:36:01.034592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.034983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 [2024-07-12 00:36:01.035305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.354 [2024-07-12 00:36:01.035328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.354 task offset: 37632 on job bdev=Nvme0n1 fails 00:16:56.354 00:16:56.354 Latency(us) 00:16:56.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.354 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:56.354 Job: Nvme0n1 ended in about 0.24 seconds with error 00:16:56.354 Verification LBA range: start 0x0 length 0x400 00:16:56.354 Nvme0n1 : 0.24 1069.68 66.86 267.42 0.00 45312.70 4110.89 42419.67 00:16:56.354 =================================================================================================================== 00:16:56.354 Total : 1069.68 66.86 267.42 0.00 45312.70 4110.89 42419.67 00:16:56.355 [2024-07-12 00:36:01.035349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:56.355 [2024-07-12 00:36:01.035520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.035909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 
[2024-07-12 00:36:01.035965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.035988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036366] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.036978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.036995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.037016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.037053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.037070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.037090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.355 [2024-07-12 00:36:01.037107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.355 [2024-07-12 00:36:01.037127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:56.356 [2024-07-12 00:36:01.037336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037664] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:16:56.356 [2024-07-12 00:36:01.037825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.356 [2024-07-12 00:36:01.037854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.356 [2024-07-12 00:36:01.037896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.356 [2024-07-12 00:36:01.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.356 [2024-07-12 00:36:01.037964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.037981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:16:56.356 [2024-07-12 00:36:01.039288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:56.356 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.356 00:36:01 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:56.356 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.356 [2024-07-12 00:36:01.044738] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:56.356 [2024-07-12 00:36:01.044820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:56.356 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 [2024-07-12 00:36:01.048502] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:56.356 [2024-07-12 00:36:01.048698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:56.356 [2024-07-12 00:36:01.048747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.356 [2024-07-12 00:36:01.048780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:56.356 [2024-07-12 00:36:01.048800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:56.356 [2024-07-12 00:36:01.048830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:56.356 [2024-07-12 00:36:01.048847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500002ad80 00:16:56.356 [2024-07-12 00:36:01.048930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:56.356 [2024-07-12 00:36:01.048998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:56.356 [2024-07-12 00:36:01.049022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:56.356 [2024-07-12 00:36:01.049048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:56.356 [2024-07-12 00:36:01.049089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
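Annotation: the failure injected here is what produced the dump above. host_management.sh revokes the host's access mid-I/O with nvmf_subsystem_remove_host, the target aborts every queued command (ABORTED - SQ DELETION), the initiator's reconnect is rejected with "does not allow host", and access is then re-granted with nvmf_subsystem_add_host. A minimal sketch of that RPC pair, using the paths and NQNs from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode0
    host=nqn.2016-06.io.spdk:host0

    # Revoke access: in-flight queues are torn down and reconnects fail.
    "$rpc" nvmf_subsystem_remove_host "$subsys" "$host"

    # Re-grant access: a later controller reset can connect again.
    "$rpc" nvmf_subsystem_add_host "$subsys" "$host"
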
00:16:56.356 00:36:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.356 00:36:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 79913 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.297 { 00:16:57.297 "params": { 00:16:57.297 "name": "Nvme$subsystem", 00:16:57.297 "trtype": "$TEST_TRANSPORT", 00:16:57.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.297 "adrfam": "ipv4", 00:16:57.297 "trsvcid": "$NVMF_PORT", 00:16:57.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.297 "hdgst": ${hdgst:-false}, 00:16:57.297 "ddgst": ${ddgst:-false} 00:16:57.297 }, 00:16:57.297 "method": "bdev_nvme_attach_controller" 00:16:57.297 } 00:16:57.297 EOF 00:16:57.297 )") 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:57.297 00:36:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.297 "params": { 00:16:57.297 "name": "Nvme0", 00:16:57.297 "trtype": "tcp", 00:16:57.297 "traddr": "10.0.0.2", 00:16:57.297 "adrfam": "ipv4", 00:16:57.297 "trsvcid": "4420", 00:16:57.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:57.297 "hdgst": false, 00:16:57.297 "ddgst": false 00:16:57.297 }, 00:16:57.297 "method": "bdev_nvme_attach_controller" 00:16:57.297 }' 00:16:57.297 [2024-07-12 00:36:02.178194] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:57.297 [2024-07-12 00:36:02.178387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79963 ] 00:16:57.555 [2024-07-12 00:36:02.358574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.813 [2024-07-12 00:36:02.653963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.379 Running I/O for 1 seconds... 
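Annotation: the waitforio helper traced before the failure injection is what gated the first run (read_io_count=195 cleared the 100-op bar), and the same gate pattern confirms I/O is flowing on this recovery run. A sketch of that poll over bdevperf's own RPC socket, assuming jq is available; the retry interval is an assumption, not taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll up to 10 times for at least 100 completed reads on Nvme0n1.
    for ((i = 10; i != 0; i--)); do
        ops=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && break
        sleep 0.25   # assumed retry interval
    done
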
00:16:59.314 00:16:59.314 Latency(us) 00:16:59.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.314 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:59.314 Verification LBA range: start 0x0 length 0x400 00:16:59.314 Nvme0n1 : 1.04 1291.29 80.71 0.00 0.00 48628.76 9234.62 42896.29 00:16:59.314 =================================================================================================================== 00:16:59.314 Total : 1291.29 80.71 0.00 0.00 48628.76 9234.62 42896.29 00:17:00.718 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 79913 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.718 rmmod nvme_tcp 00:17:00.718 rmmod nvme_fabrics 00:17:00.718 rmmod nvme_keyring 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 79841 ']' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 79841 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 79841 ']' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 79841 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79841 00:17:00.718 killing process with pid 79841 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79841' 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@967 -- # kill 79841 00:17:00.718 00:36:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 79841 00:17:02.093 [2024-07-12 00:36:06.832615] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:02.093 00:17:02.093 real 0m8.935s 00:17:02.093 user 0m35.543s 00:17:02.093 sys 0m1.776s 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.093 00:36:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.093 ************************************ 00:17:02.093 END TEST nvmf_host_management 00:17:02.093 ************************************ 00:17:02.093 00:36:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.093 00:36:07 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.093 00:36:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.093 00:36:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.093 00:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.093 ************************************ 00:17:02.093 START TEST nvmf_lvol 00:17:02.093 ************************************ 00:17:02.093 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:02.352 * Looking for test storage... 
00:17:02.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.352 00:36:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:02.353 00:36:07 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:02.353 Cannot find device "nvmf_tgt_br" 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.353 Cannot find device "nvmf_tgt_br2" 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:02.353 Cannot find device "nvmf_tgt_br" 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:02.353 Cannot find device "nvmf_tgt_br2" 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.353 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:02.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:02.612 00:17:02.612 --- 10.0.0.2 ping statistics --- 00:17:02.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.612 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:02.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:17:02.612 00:17:02.612 --- 10.0.0.3 ping statistics --- 00:17:02.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.612 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:02.612 00:17:02.612 --- 10.0.0.1 ping statistics --- 00:17:02.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.612 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:02.612 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=80205 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 80205 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 80205 ']' 00:17:02.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.613 00:36:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:02.871 [2024-07-12 00:36:07.565004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:02.871 [2024-07-12 00:36:07.565175] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.871 [2024-07-12 00:36:07.739944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:03.130 [2024-07-12 00:36:08.035080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.130 [2024-07-12 00:36:08.035162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:03.130 [2024-07-12 00:36:08.035181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.130 [2024-07-12 00:36:08.035196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.130 [2024-07-12 00:36:08.035209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.130 [2024-07-12 00:36:08.035480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.130 [2024-07-12 00:36:08.035536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.130 [2024-07-12 00:36:08.035551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.697 00:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.697 00:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:03.697 00:36:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.697 00:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.697 00:36:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:03.955 00:36:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.955 00:36:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:04.215 [2024-07-12 00:36:08.935995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.215 00:36:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:04.474 00:36:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:04.474 00:36:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.041 00:36:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:05.041 00:36:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:05.299 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:05.557 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=939ce3af-02a0-4e70-b285-70a31cd200ae 00:17:05.557 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 939ce3af-02a0-4e70-b285-70a31cd200ae lvol 20 00:17:05.816 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=69666ea3-6537-4f75-b40f-e5c931bb37d2 00:17:05.816 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:06.074 00:36:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69666ea3-6537-4f75-b40f-e5c931bb37d2 00:17:06.332 00:36:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:06.590 [2024-07-12 00:36:11.478765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.590 00:36:11 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:07.158 00:36:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=80358 00:17:07.158 00:36:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:07.158 00:36:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:08.091 00:36:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 69666ea3-6537-4f75-b40f-e5c931bb37d2 MY_SNAPSHOT 00:17:08.348 00:36:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=61a57e58-8ebb-46d4-bcbc-40f9815d7949 00:17:08.348 00:36:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 69666ea3-6537-4f75-b40f-e5c931bb37d2 30 00:17:08.607 00:36:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 61a57e58-8ebb-46d4-bcbc-40f9815d7949 MY_CLONE 00:17:09.174 00:36:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=18cd1bba-f036-4fd4-b20e-35b4118f8a18 00:17:09.174 00:36:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 18cd1bba-f036-4fd4-b20e-35b4118f8a18 00:17:09.741 00:36:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 80358 00:17:17.878 Initializing NVMe Controllers 00:17:17.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:17.878 Controller IO queue size 128, less than required. 00:17:17.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:17.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:17.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:17.878 Initialization complete. Launching workers. 
00:17:17.879 ======================================================== 00:17:17.879 Latency(us) 00:17:17.879 Device Information : IOPS MiB/s Average min max 00:17:17.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8226.10 32.13 15571.07 587.64 240070.71 00:17:17.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8036.00 31.39 15930.39 5098.79 198695.47 00:17:17.879 ======================================================== 00:17:17.879 Total : 16262.10 63.52 15748.63 587.64 240070.71 00:17:17.879 00:17:17.879 00:36:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:17.879 00:36:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69666ea3-6537-4f75-b40f-e5c931bb37d2 00:17:18.136 00:36:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 939ce3af-02a0-4e70-b285-70a31cd200ae 00:17:18.394 00:36:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:18.394 00:36:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:18.394 00:36:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:18.394 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.394 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.652 rmmod nvme_tcp 00:17:18.652 rmmod nvme_fabrics 00:17:18.652 rmmod nvme_keyring 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 80205 ']' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 80205 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 80205 ']' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 80205 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80205 00:17:18.652 killing process with pid 80205 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80205' 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 80205 00:17:18.652 00:36:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 80205 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
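
For reference, the lvol lifecycle the nvmf_lvol test drives above reduces to the following RPC sequence. This is a condensed sketch, not the test script itself: $RPC stands for the repo's scripts/rpc.py, sizes are in MiB as in the run above, and capturing each printed UUID into a shell variable is illustrative.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)        # lvstore on the raid0 bdev; prints its UUID
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # point-in-time snapshot
    $RPC bdev_lvol_resize "$lvol" 30                      # grow the live lvol while perf I/O runs
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $RPC bdev_lvol_inflate "$clone"                       # fully allocate the clone, detaching it from the snapshot
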
00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.565 ************************************ 00:17:20.565 END TEST nvmf_lvol 00:17:20.565 ************************************ 00:17:20.565 00:17:20.565 real 0m18.036s 00:17:20.565 user 1m12.119s 00:17:20.565 sys 0m3.885s 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 00:36:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:20.565 00:36:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:20.565 00:36:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.565 00:36:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.565 00:36:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 ************************************ 00:17:20.565 START TEST nvmf_lvs_grow 00:17:20.565 ************************************ 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:20.565 * Looking for test storage... 
00:17:20.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.565 Cannot find device "nvmf_tgt_br" 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.565 Cannot find device "nvmf_tgt_br2" 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.565 Cannot find device "nvmf_tgt_br" 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.565 Cannot find device "nvmf_tgt_br2" 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.565 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.565 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.566 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:20.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:20.822 00:17:20.822 --- 10.0.0.2 ping statistics --- 00:17:20.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.822 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:20.822 00:17:20.822 --- 10.0.0.3 ping statistics --- 00:17:20.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.822 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:20.822 00:17:20.822 --- 10.0.0.1 ping statistics --- 00:17:20.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.822 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=80739 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 80739 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 80739 ']' 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
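
The "Cannot find device" errors and re-plumbing above are nvmf_veth_init tearing down and rebuilding the same topology every test uses: the initiator side stays in the root namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2/10.0.0.3, and veth peers are stitched together by a bridge. Condensed to its essentials (a sketch assuming root privileges and iproute2; the second target interface nvmf_tgt_if2/10.0.0.3 follows the same pattern and is omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge
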
00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.822 00:36:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:20.822 [2024-07-12 00:36:25.681106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:20.822 [2024-07-12 00:36:25.681274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.079 [2024-07-12 00:36:25.852391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.335 [2024-07-12 00:36:26.157899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.335 [2024-07-12 00:36:26.157989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.335 [2024-07-12 00:36:26.158011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.335 [2024-07-12 00:36:26.158029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.335 [2024-07-12 00:36:26.158044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.335 [2024-07-12 00:36:26.158107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.899 00:36:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:22.157 [2024-07-12 00:36:26.882658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:22.157 ************************************ 00:17:22.157 START TEST lvs_grow_clean 00:17:22.157 ************************************ 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:22.157 00:36:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:22.157 00:36:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.415 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:22.415 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:22.979 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:22.979 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:22.979 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:23.237 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:23.237 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:23.237 00:36:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea lvol 150 00:17:23.516 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=37136ad1-82e1-4368-8076-9b33a1b60be1 00:17:23.516 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:23.516 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:23.774 [2024-07-12 00:36:28.583060] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:23.774 [2024-07-12 00:36:28.583199] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:23.774 true 00:17:23.774 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:23.774 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:24.055 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:24.055 00:36:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.331 00:36:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37136ad1-82e1-4368-8076-9b33a1b60be1 00:17:24.588 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:24.847 [2024-07-12 00:36:29.701679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.847 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=80911 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 80911 /var/tmp/bdevperf.sock 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 80911 ']' 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.105 00:36:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:25.363 [2024-07-12 00:36:30.117097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:25.363 [2024-07-12 00:36:30.117300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80911 ] 00:17:25.363 [2024-07-12 00:36:30.292442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.621 [2024-07-12 00:36:30.539435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.187 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.187 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:26.187 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.753 Nvme0n1 00:17:26.753 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:27.013 [ 00:17:27.013 { 00:17:27.013 "aliases": [ 00:17:27.013 "37136ad1-82e1-4368-8076-9b33a1b60be1" 00:17:27.013 ], 00:17:27.013 "assigned_rate_limits": { 00:17:27.013 "r_mbytes_per_sec": 0, 00:17:27.014 "rw_ios_per_sec": 0, 00:17:27.014 "rw_mbytes_per_sec": 0, 00:17:27.014 "w_mbytes_per_sec": 0 00:17:27.014 }, 00:17:27.014 "block_size": 4096, 00:17:27.014 "claimed": false, 00:17:27.014 "driver_specific": { 00:17:27.014 "mp_policy": "active_passive", 00:17:27.014 "nvme": [ 00:17:27.014 { 00:17:27.014 "ctrlr_data": { 00:17:27.014 "ana_reporting": false, 00:17:27.014 "cntlid": 1, 00:17:27.014 "firmware_revision": "24.09", 00:17:27.014 "model_number": "SPDK bdev Controller", 00:17:27.014 "multi_ctrlr": true, 00:17:27.014 "oacs": { 00:17:27.014 "firmware": 0, 00:17:27.014 "format": 0, 00:17:27.014 "ns_manage": 0, 00:17:27.014 "security": 0 00:17:27.014 }, 00:17:27.014 "serial_number": "SPDK0", 00:17:27.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:27.014 "vendor_id": "0x8086" 00:17:27.014 }, 00:17:27.014 "ns_data": { 00:17:27.014 "can_share": true, 00:17:27.014 "id": 1 00:17:27.014 }, 00:17:27.014 "trid": { 00:17:27.014 "adrfam": "IPv4", 00:17:27.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:27.014 "traddr": "10.0.0.2", 00:17:27.014 "trsvcid": "4420", 00:17:27.014 "trtype": "TCP" 00:17:27.014 }, 00:17:27.014 "vs": { 00:17:27.014 "nvme_version": "1.3" 00:17:27.014 } 00:17:27.014 } 00:17:27.014 ] 00:17:27.014 }, 00:17:27.014 "memory_domains": [ 00:17:27.014 { 00:17:27.014 "dma_device_id": "system", 00:17:27.014 "dma_device_type": 1 00:17:27.014 } 00:17:27.014 ], 00:17:27.014 "name": "Nvme0n1", 00:17:27.014 "num_blocks": 38912, 00:17:27.014 "product_name": "NVMe disk", 00:17:27.014 "supported_io_types": { 00:17:27.014 "abort": true, 00:17:27.014 "compare": true, 00:17:27.014 "compare_and_write": true, 00:17:27.014 "copy": true, 00:17:27.014 "flush": true, 00:17:27.014 "get_zone_info": false, 00:17:27.014 "nvme_admin": true, 00:17:27.014 "nvme_io": true, 00:17:27.014 "nvme_io_md": false, 00:17:27.014 "nvme_iov_md": false, 00:17:27.014 "read": true, 00:17:27.014 "reset": true, 00:17:27.014 "seek_data": false, 00:17:27.014 "seek_hole": false, 00:17:27.014 "unmap": true, 00:17:27.014 "write": true, 00:17:27.014 "write_zeroes": true, 00:17:27.014 "zcopy": false, 00:17:27.014 
"zone_append": false, 00:17:27.014 "zone_management": false 00:17:27.014 }, 00:17:27.014 "uuid": "37136ad1-82e1-4368-8076-9b33a1b60be1", 00:17:27.014 "zoned": false 00:17:27.014 } 00:17:27.014 ] 00:17:27.014 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=80954 00:17:27.014 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:27.014 00:36:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.014 Running I/O for 10 seconds... 00:17:27.949 Latency(us) 00:17:27.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.949 Nvme0n1 : 1.00 6327.00 24.71 0.00 0.00 0.00 0.00 0.00 00:17:27.949 =================================================================================================================== 00:17:27.949 Total : 6327.00 24.71 0.00 0.00 0.00 0.00 0.00 00:17:27.949 00:17:28.885 00:36:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:29.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.144 Nvme0n1 : 2.00 6240.50 24.38 0.00 0.00 0.00 0.00 0.00 00:17:29.144 =================================================================================================================== 00:17:29.144 Total : 6240.50 24.38 0.00 0.00 0.00 0.00 0.00 00:17:29.144 00:17:29.402 true 00:17:29.402 00:36:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:29.402 00:36:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:29.659 00:36:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:29.659 00:36:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:29.659 00:36:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 80954 00:17:30.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.226 Nvme0n1 : 3.00 6323.33 24.70 0.00 0.00 0.00 0.00 0.00 00:17:30.226 =================================================================================================================== 00:17:30.226 Total : 6323.33 24.70 0.00 0.00 0.00 0.00 0.00 00:17:30.226 00:17:31.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.202 Nvme0n1 : 4.00 6362.00 24.85 0.00 0.00 0.00 0.00 0.00 00:17:31.202 =================================================================================================================== 00:17:31.202 Total : 6362.00 24.85 0.00 0.00 0.00 0.00 0.00 00:17:31.202 00:17:32.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.137 Nvme0n1 : 5.00 6297.60 24.60 0.00 0.00 0.00 0.00 0.00 00:17:32.137 =================================================================================================================== 00:17:32.137 Total : 6297.60 24.60 0.00 0.00 0.00 0.00 0.00 00:17:32.137 00:17:33.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.071 
Nvme0n1 : 6.00 6215.67 24.28 0.00 0.00 0.00 0.00 0.00 00:17:33.071 =================================================================================================================== 00:17:33.071 Total : 6215.67 24.28 0.00 0.00 0.00 0.00 0.00 00:17:33.071 00:17:34.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.005 Nvme0n1 : 7.00 6198.29 24.21 0.00 0.00 0.00 0.00 0.00 00:17:34.005 =================================================================================================================== 00:17:34.005 Total : 6198.29 24.21 0.00 0.00 0.00 0.00 0.00 00:17:34.005 00:17:34.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.939 Nvme0n1 : 8.00 6234.38 24.35 0.00 0.00 0.00 0.00 0.00 00:17:34.939 =================================================================================================================== 00:17:34.939 Total : 6234.38 24.35 0.00 0.00 0.00 0.00 0.00 00:17:34.939 00:17:36.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.313 Nvme0n1 : 9.00 6256.00 24.44 0.00 0.00 0.00 0.00 0.00 00:17:36.313 =================================================================================================================== 00:17:36.313 Total : 6256.00 24.44 0.00 0.00 0.00 0.00 0.00 00:17:36.313 00:17:37.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.287 Nvme0n1 : 10.00 6264.10 24.47 0.00 0.00 0.00 0.00 0.00 00:17:37.287 =================================================================================================================== 00:17:37.287 Total : 6264.10 24.47 0.00 0.00 0.00 0.00 0.00 00:17:37.287 00:17:37.287 00:17:37.287 Latency(us) 00:17:37.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.287 Nvme0n1 : 10.02 6271.76 24.50 0.00 0.00 20391.12 8460.10 46232.67 00:17:37.287 =================================================================================================================== 00:17:37.287 Total : 6271.76 24.50 0.00 0.00 20391.12 8460.10 46232.67 00:17:37.287 0 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 80911 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 80911 ']' 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 80911 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80911 00:17:37.287 killing process with pid 80911 00:17:37.287 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.287 00:17:37.287 Latency(us) 00:17:37.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.287 =================================================================================================================== 00:17:37.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80911' 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 80911 00:17:37.287 00:36:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 80911 00:17:38.661 00:36:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.661 00:36:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:38.919 00:36:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:38.919 00:36:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:39.177 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:39.177 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:39.177 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.435 [2024-07-12 00:36:44.369343] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:39.693 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:39.952 2024/07/12 00:36:44 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:8b01ef07-33e2-4325-a0ba-45a0b71f52ea], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:39.952 request: 00:17:39.952 { 00:17:39.952 "method": "bdev_lvol_get_lvstores", 00:17:39.952 "params": { 00:17:39.952 "uuid": "8b01ef07-33e2-4325-a0ba-45a0b71f52ea" 00:17:39.952 } 00:17:39.952 } 00:17:39.952 Got JSON-RPC error response 00:17:39.952 GoRPCClient: error on JSON-RPC call 00:17:39.952 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:39.952 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.952 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.952 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.952 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:40.210 aio_bdev 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 37136ad1-82e1-4368-8076-9b33a1b60be1 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=37136ad1-82e1-4368-8076-9b33a1b60be1 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.210 00:36:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:40.468 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37136ad1-82e1-4368-8076-9b33a1b60be1 -t 2000 00:17:40.724 [ 00:17:40.724 { 00:17:40.724 "aliases": [ 00:17:40.724 "lvs/lvol" 00:17:40.724 ], 00:17:40.724 "assigned_rate_limits": { 00:17:40.724 "r_mbytes_per_sec": 0, 00:17:40.724 "rw_ios_per_sec": 0, 00:17:40.724 "rw_mbytes_per_sec": 0, 00:17:40.724 "w_mbytes_per_sec": 0 00:17:40.724 }, 00:17:40.724 "block_size": 4096, 00:17:40.724 "claimed": false, 00:17:40.724 "driver_specific": { 00:17:40.725 "lvol": { 00:17:40.725 "base_bdev": "aio_bdev", 00:17:40.725 "clone": false, 00:17:40.725 "esnap_clone": false, 00:17:40.725 "lvol_store_uuid": "8b01ef07-33e2-4325-a0ba-45a0b71f52ea", 00:17:40.725 "num_allocated_clusters": 38, 00:17:40.725 "snapshot": false, 00:17:40.725 "thin_provision": false 00:17:40.725 } 00:17:40.725 }, 00:17:40.725 "name": "37136ad1-82e1-4368-8076-9b33a1b60be1", 00:17:40.725 "num_blocks": 38912, 00:17:40.725 "product_name": "Logical Volume", 00:17:40.725 "supported_io_types": { 00:17:40.725 "abort": false, 00:17:40.725 "compare": false, 00:17:40.725 "compare_and_write": false, 00:17:40.725 "copy": false, 00:17:40.725 "flush": false, 00:17:40.725 "get_zone_info": false, 00:17:40.725 "nvme_admin": false, 00:17:40.725 "nvme_io": false, 00:17:40.725 "nvme_io_md": false, 00:17:40.725 "nvme_iov_md": false, 00:17:40.725 "read": true, 00:17:40.725 "reset": true, 
00:17:40.725 "seek_data": true, 00:17:40.725 "seek_hole": true, 00:17:40.725 "unmap": true, 00:17:40.725 "write": true, 00:17:40.725 "write_zeroes": true, 00:17:40.725 "zcopy": false, 00:17:40.725 "zone_append": false, 00:17:40.725 "zone_management": false 00:17:40.725 }, 00:17:40.725 "uuid": "37136ad1-82e1-4368-8076-9b33a1b60be1", 00:17:40.725 "zoned": false 00:17:40.725 } 00:17:40.725 ] 00:17:40.725 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:40.725 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:40.725 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:40.981 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:40.981 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:40.981 00:36:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:41.239 00:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:41.239 00:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 37136ad1-82e1-4368-8076-9b33a1b60be1 00:17:41.805 00:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b01ef07-33e2-4325-a0ba-45a0b71f52ea 00:17:42.063 00:36:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:42.321 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:42.579 ************************************ 00:17:42.579 END TEST lvs_grow_clean 00:17:42.579 ************************************ 00:17:42.579 00:17:42.579 real 0m20.478s 00:17:42.579 user 0m19.645s 00:17:42.579 sys 0m2.535s 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.579 ************************************ 00:17:42.579 START TEST lvs_grow_dirty 00:17:42.579 ************************************ 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:42.579 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.146 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:43.146 00:36:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:43.146 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:43.146 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:43.146 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:43.450 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:43.450 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:43.450 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 lvol 150 00:17:43.707 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:17:43.707 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:43.707 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:43.964 [2024-07-12 00:36:48.896879] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:43.964 [2024-07-12 00:36:48.897016] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:44.221 true 00:17:44.221 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:44.221 00:36:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:44.478 00:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:44.478 00:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:44.735 00:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:17:44.992 00:36:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:45.249 [2024-07-12 00:36:50.009701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.249 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=81365 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 81365 /var/tmp/bdevperf.sock 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 81365 ']' 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.506 00:36:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.765 [2024-07-12 00:36:50.480147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:45.765 [2024-07-12 00:36:50.480340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81365 ] 00:17:45.765 [2024-07-12 00:36:50.656443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.331 [2024-07-12 00:36:50.961195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.589 00:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.589 00:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:46.589 00:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:47.154 Nvme0n1 00:17:47.154 00:36:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:47.154 [ 00:17:47.154 { 00:17:47.154 "aliases": [ 00:17:47.154 "be2f15d0-7fed-43c3-9b9a-295a0a7fd774" 00:17:47.154 ], 00:17:47.154 "assigned_rate_limits": { 00:17:47.154 "r_mbytes_per_sec": 0, 00:17:47.154 "rw_ios_per_sec": 0, 00:17:47.154 "rw_mbytes_per_sec": 0, 00:17:47.154 "w_mbytes_per_sec": 0 00:17:47.154 }, 00:17:47.154 "block_size": 4096, 00:17:47.154 "claimed": false, 00:17:47.154 "driver_specific": { 00:17:47.154 "mp_policy": "active_passive", 00:17:47.154 "nvme": [ 00:17:47.154 { 00:17:47.154 "ctrlr_data": { 00:17:47.154 "ana_reporting": false, 00:17:47.154 "cntlid": 1, 00:17:47.154 "firmware_revision": "24.09", 00:17:47.154 "model_number": "SPDK bdev Controller", 00:17:47.154 "multi_ctrlr": true, 00:17:47.154 "oacs": { 00:17:47.154 "firmware": 0, 00:17:47.154 "format": 0, 00:17:47.154 "ns_manage": 0, 00:17:47.154 "security": 0 00:17:47.154 }, 00:17:47.154 "serial_number": "SPDK0", 00:17:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.155 "vendor_id": "0x8086" 00:17:47.155 }, 00:17:47.155 "ns_data": { 00:17:47.155 "can_share": true, 00:17:47.155 "id": 1 00:17:47.155 }, 00:17:47.155 "trid": { 00:17:47.155 "adrfam": "IPv4", 00:17:47.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.155 "traddr": "10.0.0.2", 00:17:47.155 "trsvcid": "4420", 00:17:47.155 "trtype": "TCP" 00:17:47.155 }, 00:17:47.155 "vs": { 00:17:47.155 "nvme_version": "1.3" 00:17:47.155 } 00:17:47.155 } 00:17:47.155 ] 00:17:47.155 }, 00:17:47.155 "memory_domains": [ 00:17:47.155 { 00:17:47.155 "dma_device_id": "system", 00:17:47.155 "dma_device_type": 1 00:17:47.155 } 00:17:47.155 ], 00:17:47.155 "name": "Nvme0n1", 00:17:47.155 "num_blocks": 38912, 00:17:47.155 "product_name": "NVMe disk", 00:17:47.155 "supported_io_types": { 00:17:47.155 "abort": true, 00:17:47.155 "compare": true, 00:17:47.155 "compare_and_write": true, 00:17:47.155 "copy": true, 00:17:47.155 "flush": true, 00:17:47.155 "get_zone_info": false, 00:17:47.155 "nvme_admin": true, 00:17:47.155 "nvme_io": true, 00:17:47.155 "nvme_io_md": false, 00:17:47.155 "nvme_iov_md": false, 00:17:47.155 "read": true, 00:17:47.155 "reset": true, 00:17:47.155 "seek_data": false, 00:17:47.155 "seek_hole": false, 00:17:47.155 "unmap": true, 00:17:47.155 "write": true, 00:17:47.155 "write_zeroes": true, 00:17:47.155 "zcopy": false, 00:17:47.155 
"zone_append": false, 00:17:47.155 "zone_management": false 00:17:47.155 }, 00:17:47.155 "uuid": "be2f15d0-7fed-43c3-9b9a-295a0a7fd774", 00:17:47.155 "zoned": false 00:17:47.155 } 00:17:47.155 ] 00:17:47.414 00:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:47.414 00:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=81413 00:17:47.414 00:36:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:47.414 Running I/O for 10 seconds... 00:17:48.347 Latency(us) 00:17:48.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.347 Nvme0n1 : 1.00 6782.00 26.49 0.00 0.00 0.00 0.00 0.00 00:17:48.347 =================================================================================================================== 00:17:48.347 Total : 6782.00 26.49 0.00 0.00 0.00 0.00 0.00 00:17:48.347 00:17:49.281 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:49.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.539 Nvme0n1 : 2.00 6724.50 26.27 0.00 0.00 0.00 0.00 0.00 00:17:49.539 =================================================================================================================== 00:17:49.539 Total : 6724.50 26.27 0.00 0.00 0.00 0.00 0.00 00:17:49.539 00:17:49.539 true 00:17:49.539 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:49.539 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:50.128 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:50.128 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:50.128 00:36:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 81413 00:17:50.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.398 Nvme0n1 : 3.00 6405.33 25.02 0.00 0.00 0.00 0.00 0.00 00:17:50.398 =================================================================================================================== 00:17:50.398 Total : 6405.33 25.02 0.00 0.00 0.00 0.00 0.00 00:17:50.398 00:17:51.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.353 Nvme0n1 : 4.00 6313.25 24.66 0.00 0.00 0.00 0.00 0.00 00:17:51.353 =================================================================================================================== 00:17:51.353 Total : 6313.25 24.66 0.00 0.00 0.00 0.00 0.00 00:17:51.353 00:17:52.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.719 Nvme0n1 : 5.00 6363.80 24.86 0.00 0.00 0.00 0.00 0.00 00:17:52.719 =================================================================================================================== 00:17:52.719 Total : 6363.80 24.86 0.00 0.00 0.00 0.00 0.00 00:17:52.719 00:17:53.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.648 
Nvme0n1 : 6.00 6398.33 24.99 0.00 0.00 0.00 0.00 0.00 00:17:53.648 =================================================================================================================== 00:17:53.648 Total : 6398.33 24.99 0.00 0.00 0.00 0.00 0.00 00:17:53.648 00:17:54.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.635 Nvme0n1 : 7.00 6409.14 25.04 0.00 0.00 0.00 0.00 0.00 00:17:54.635 =================================================================================================================== 00:17:54.635 Total : 6409.14 25.04 0.00 0.00 0.00 0.00 0.00 00:17:54.635 00:17:55.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.570 Nvme0n1 : 8.00 6405.62 25.02 0.00 0.00 0.00 0.00 0.00 00:17:55.570 =================================================================================================================== 00:17:55.570 Total : 6405.62 25.02 0.00 0.00 0.00 0.00 0.00 00:17:55.570 00:17:56.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.503 Nvme0n1 : 9.00 6340.22 24.77 0.00 0.00 0.00 0.00 0.00 00:17:56.503 =================================================================================================================== 00:17:56.503 Total : 6340.22 24.77 0.00 0.00 0.00 0.00 0.00 00:17:56.503 00:17:57.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.437 Nvme0n1 : 10.00 6336.10 24.75 0.00 0.00 0.00 0.00 0.00 00:17:57.437 =================================================================================================================== 00:17:57.437 Total : 6336.10 24.75 0.00 0.00 0.00 0.00 0.00 00:17:57.437 00:17:57.437 00:17:57.437 Latency(us) 00:17:57.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.437 Nvme0n1 : 10.01 6340.77 24.77 0.00 0.00 20181.05 3112.96 220200.96 00:17:57.437 =================================================================================================================== 00:17:57.437 Total : 6340.77 24.77 0.00 0.00 20181.05 3112.96 220200.96 00:17:57.437 0 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 81365 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 81365 ']' 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 81365 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81365 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.437 killing process with pid 81365 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81365' 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 81365 00:17:57.437 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.437 00:17:57.437 Latency(us) 00:17:57.437 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.437 =================================================================================================================== 00:17:57.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.437 00:37:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 81365 00:17:58.814 00:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:59.073 00:37:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:59.346 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:17:59.346 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:59.605 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:59.605 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:59.605 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 80739 00:17:59.605 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 80739 00:17:59.864 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 80739 Killed "${NVMF_APP[@]}" "$@" 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=81589 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 81589 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 81589 ']' 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
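The dirty variant deliberately skips a clean shutdown: the kill -9 above leaves the lvstore metadata unflushed, and the target is then restarted against the same backing file so the blobstore replay path gets exercised (see the "Performing recovery on blobstore" notices a little further down). A condensed sketch of that step, assuming the paths from this run and using $aio_file as a placeholder for the same backing file:

  kill -9 "$nvmfpid"                 # no graceful teardown; lvstore left dirty
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!                         # fresh target process in the same namespace
  scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
                                     # re-attaching the file triggers blobstore recovery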
00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.864 00:37:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:59.864 [2024-07-12 00:37:04.693275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:59.864 [2024-07-12 00:37:04.693755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.123 [2024-07-12 00:37:04.885160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.382 [2024-07-12 00:37:05.180675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.382 [2024-07-12 00:37:05.180769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.382 [2024-07-12 00:37:05.180787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.382 [2024-07-12 00:37:05.180804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.382 [2024-07-12 00:37:05.180816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.382 [2024-07-12 00:37:05.180869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.040 00:37:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:01.298 [2024-07-12 00:37:06.102143] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:01.298 [2024-07-12 00:37:06.102473] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:01.298 [2024-07-12 00:37:06.102733] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:01.298 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:01.298 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:18:01.298 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:18:01.298 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:01.298 00:37:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:01.299 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:01.299 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:01.299 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:01.557 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be2f15d0-7fed-43c3-9b9a-295a0a7fd774 -t 2000 00:18:01.816 [ 00:18:01.816 { 00:18:01.816 "aliases": [ 00:18:01.816 "lvs/lvol" 00:18:01.816 ], 00:18:01.816 "assigned_rate_limits": { 00:18:01.816 "r_mbytes_per_sec": 0, 00:18:01.816 "rw_ios_per_sec": 0, 00:18:01.816 "rw_mbytes_per_sec": 0, 00:18:01.816 "w_mbytes_per_sec": 0 00:18:01.816 }, 00:18:01.816 "block_size": 4096, 00:18:01.816 "claimed": false, 00:18:01.816 "driver_specific": { 00:18:01.816 "lvol": { 00:18:01.816 "base_bdev": "aio_bdev", 00:18:01.816 "clone": false, 00:18:01.816 "esnap_clone": false, 00:18:01.816 "lvol_store_uuid": "6150a794-3025-4ef5-88db-dbdf87d53eb9", 00:18:01.816 "num_allocated_clusters": 38, 00:18:01.816 "snapshot": false, 00:18:01.816 "thin_provision": false 00:18:01.816 } 00:18:01.816 }, 00:18:01.816 "name": "be2f15d0-7fed-43c3-9b9a-295a0a7fd774", 00:18:01.816 "num_blocks": 38912, 00:18:01.816 "product_name": "Logical Volume", 00:18:01.816 "supported_io_types": { 00:18:01.816 "abort": false, 00:18:01.816 "compare": false, 00:18:01.816 "compare_and_write": false, 00:18:01.816 "copy": false, 00:18:01.816 "flush": false, 00:18:01.816 "get_zone_info": false, 00:18:01.816 "nvme_admin": false, 00:18:01.816 "nvme_io": false, 00:18:01.816 "nvme_io_md": false, 00:18:01.816 "nvme_iov_md": false, 00:18:01.816 "read": true, 00:18:01.816 "reset": true, 00:18:01.816 "seek_data": true, 00:18:01.816 "seek_hole": true, 00:18:01.816 "unmap": true, 00:18:01.816 "write": true, 00:18:01.816 "write_zeroes": true, 00:18:01.816 "zcopy": false, 00:18:01.816 "zone_append": false, 00:18:01.816 "zone_management": false 00:18:01.816 }, 00:18:01.816 "uuid": "be2f15d0-7fed-43c3-9b9a-295a0a7fd774", 00:18:01.816 "zoned": false 00:18:01.816 } 00:18:01.816 ] 00:18:01.816 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:01.816 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:01.816 00:37:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:02.382 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:02.382 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:02.382 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:02.382 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:02.382 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:02.640 
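The two cluster checks above follow directly from this run's geometry: a 400M backing file at the 4194304-byte cluster size gives 100 clusters, 99 of them usable as data clusters after lvstore metadata (total_data_clusters == 99), and the 150M lvol pins ceil(150/4) = 38 of them ("num_allocated_clusters": 38 in the bdev dump), leaving 99 - 38 = 61 free (free_clusters == 61). With bdev_aio_delete the base bdev is gone and the lvstore is hot-removed along with it, so the next lookup has to fail; a sketch of what the NOT helper below asserts, with the UUID taken from this run:

  scripts/rpc.py bdev_aio_delete aio_bdev
  if scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9; then
      echo "lvstore still visible after base bdev removal" >&2
      exit 1    # the harness expects Code=-19 (No such device) here
  fi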
[2024-07-12 00:37:07.567820] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:02.899 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:03.157 2024/07/12 00:37:07 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6150a794-3025-4ef5-88db-dbdf87d53eb9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:18:03.157 request: 00:18:03.157 { 00:18:03.157 "method": "bdev_lvol_get_lvstores", 00:18:03.157 "params": { 00:18:03.157 "uuid": "6150a794-3025-4ef5-88db-dbdf87d53eb9" 00:18:03.157 } 00:18:03.157 } 00:18:03.157 Got JSON-RPC error response 00:18:03.157 GoRPCClient: error on JSON-RPC call 00:18:03.157 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:03.157 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.157 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.157 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.157 00:37:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.416 aio_bdev 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:18:03.416 00:37:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.416 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:03.674 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be2f15d0-7fed-43c3-9b9a-295a0a7fd774 -t 2000 00:18:03.932 [ 00:18:03.932 { 00:18:03.932 "aliases": [ 00:18:03.932 "lvs/lvol" 00:18:03.932 ], 00:18:03.932 "assigned_rate_limits": { 00:18:03.932 "r_mbytes_per_sec": 0, 00:18:03.932 "rw_ios_per_sec": 0, 00:18:03.932 "rw_mbytes_per_sec": 0, 00:18:03.932 "w_mbytes_per_sec": 0 00:18:03.932 }, 00:18:03.932 "block_size": 4096, 00:18:03.932 "claimed": false, 00:18:03.932 "driver_specific": { 00:18:03.932 "lvol": { 00:18:03.932 "base_bdev": "aio_bdev", 00:18:03.932 "clone": false, 00:18:03.932 "esnap_clone": false, 00:18:03.932 "lvol_store_uuid": "6150a794-3025-4ef5-88db-dbdf87d53eb9", 00:18:03.932 "num_allocated_clusters": 38, 00:18:03.932 "snapshot": false, 00:18:03.932 "thin_provision": false 00:18:03.932 } 00:18:03.932 }, 00:18:03.932 "name": "be2f15d0-7fed-43c3-9b9a-295a0a7fd774", 00:18:03.932 "num_blocks": 38912, 00:18:03.932 "product_name": "Logical Volume", 00:18:03.932 "supported_io_types": { 00:18:03.932 "abort": false, 00:18:03.932 "compare": false, 00:18:03.932 "compare_and_write": false, 00:18:03.932 "copy": false, 00:18:03.932 "flush": false, 00:18:03.932 "get_zone_info": false, 00:18:03.932 "nvme_admin": false, 00:18:03.932 "nvme_io": false, 00:18:03.933 "nvme_io_md": false, 00:18:03.933 "nvme_iov_md": false, 00:18:03.933 "read": true, 00:18:03.933 "reset": true, 00:18:03.933 "seek_data": true, 00:18:03.933 "seek_hole": true, 00:18:03.933 "unmap": true, 00:18:03.933 "write": true, 00:18:03.933 "write_zeroes": true, 00:18:03.933 "zcopy": false, 00:18:03.933 "zone_append": false, 00:18:03.933 "zone_management": false 00:18:03.933 }, 00:18:03.933 "uuid": "be2f15d0-7fed-43c3-9b9a-295a0a7fd774", 00:18:03.933 "zoned": false 00:18:03.933 } 00:18:03.933 ] 00:18:03.933 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:03.933 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:03.933 00:37:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:04.200 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:04.200 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:04.200 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:04.467 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:04.467 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete be2f15d0-7fed-43c3-9b9a-295a0a7fd774 00:18:04.726 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6150a794-3025-4ef5-88db-dbdf87d53eb9 00:18:05.291 00:37:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:05.549 00:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:05.807 ************************************ 00:18:05.807 END TEST lvs_grow_dirty 00:18:05.807 ************************************ 00:18:05.807 00:18:05.807 real 0m23.209s 00:18:05.807 user 0m50.388s 00:18:05.807 sys 0m7.947s 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:05.807 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:05.807 nvmf_trace.0 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.070 rmmod nvme_tcp 00:18:06.070 rmmod nvme_fabrics 00:18:06.070 rmmod nvme_keyring 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 81589 ']' 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 81589 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 81589 
']' 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 81589 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81589 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.070 killing process with pid 81589 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81589' 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 81589 00:18:06.070 00:37:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 81589 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:07.443 00:18:07.443 real 0m47.194s 00:18:07.443 user 1m18.020s 00:18:07.443 sys 0m11.491s 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.443 00:37:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:07.443 ************************************ 00:18:07.443 END TEST nvmf_lvs_grow 00:18:07.443 ************************************ 00:18:07.443 00:37:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:07.443 00:37:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.443 00:37:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:07.443 00:37:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.443 00:37:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:07.443 ************************************ 00:18:07.443 START TEST nvmf_bdev_io_wait 00:18:07.443 ************************************ 00:18:07.443 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.750 * Looking for test storage... 
00:18:07.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.750 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:07.751 Cannot find device "nvmf_tgt_br" 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.751 Cannot find device "nvmf_tgt_br2" 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:07.751 Cannot find device "nvmf_tgt_br" 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:18:07.751 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:07.752 Cannot find device "nvmf_tgt_br2" 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
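The delete errors in this stretch are expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, then rebuilds it from scratch. A condensed sketch of the topology the following lines create (names and addresses from this run; the corresponding "ip link set ... up" calls are omitted, and the second target interface, nvmf_tgt_if2 at 10.0.0.3, is added the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge joins the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT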
00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.752 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:08.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:18:08.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:08.009 00:18:08.009 --- 10.0.0.2 ping statistics --- 00:18:08.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.009 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:08.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.182 ms 00:18:08.009 00:18:08.009 --- 10.0.0.3 ping statistics --- 00:18:08.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.009 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:08.009 00:18:08.009 --- 10.0.0.1 ping statistics --- 00:18:08.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.009 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=82022 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 82022 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 82022 ']' 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.009 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
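With all three addresses answering pings, the target is started inside the namespace and the harness blocks until its RPC socket accepts requests before any configuration RPCs are issued. waitforlisten's implementation isn't visible in this excerpt; a minimal stand-in poll, assuming rpc_get_methods (a standard SPDK RPC) as the liveness probe:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # keep probing until the target's RPC server is up
  done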
00:18:08.010 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.010 00:37:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.280 [2024-07-12 00:37:13.003017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:08.280 [2024-07-12 00:37:13.003198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.280 [2024-07-12 00:37:13.177784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.844 [2024-07-12 00:37:13.543941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.844 [2024-07-12 00:37:13.544423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.844 [2024-07-12 00:37:13.544593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.844 [2024-07-12 00:37:13.544741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.844 [2024-07-12 00:37:13.544863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.844 [2024-07-12 00:37:13.545195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.844 [2024-07-12 00:37:13.545350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.844 [2024-07-12 00:37:13.545836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.844 [2024-07-12 00:37:13.545875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.409 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.668 00:37:14 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.668 [2024-07-12 00:37:14.480629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.668 Malloc0 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.668 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.926 [2024-07-12 00:37:14.626269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=82078 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.926 { 00:18:09.926 "params": { 00:18:09.926 "name": "Nvme$subsystem", 00:18:09.926 "trtype": "$TEST_TRANSPORT", 00:18:09.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.926 "adrfam": "ipv4", 00:18:09.926 "trsvcid": "$NVMF_PORT", 00:18:09.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.926 "hdgst": ${hdgst:-false}, 00:18:09.926 "ddgst": ${ddgst:-false} 00:18:09.926 }, 00:18:09.926 "method": "bdev_nvme_attach_controller" 00:18:09.926 } 
00:18:09.926 EOF 00:18:09.926 )") 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=82080 00:18:09.926 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.927 { 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme$subsystem", 00:18:09.927 "trtype": "$TEST_TRANSPORT", 00:18:09.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "$NVMF_PORT", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.927 "hdgst": ${hdgst:-false}, 00:18:09.927 "ddgst": ${ddgst:-false} 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 } 00:18:09.927 EOF 00:18:09.927 )") 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=82083 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=82087 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.927 { 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme$subsystem", 00:18:09.927 "trtype": "$TEST_TRANSPORT", 00:18:09.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "$NVMF_PORT", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.927 "hdgst": ${hdgst:-false}, 00:18:09.927 "ddgst": ${ddgst:-false} 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 } 00:18:09.927 EOF 00:18:09.927 )") 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
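The four bdevperf instances launched above (write/read/flush/unmap, pinned to core masks 0x10/0x20/0x40/0x80) each read their bdev configuration from --json /dev/fd/63, which is what a bash process substitution presents; gen_nvmf_target_json is the helper from nvmf/common.sh traced here. A hedged condensation of the launch pattern, with variable names as in bdev_io_wait.sh (-q 128 queue depth, 4 KiB I/O, 1 s runs; -s 256 requests 256 MB of memory, visible as "-m 256" in the EAL parameters below):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"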
00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.927 { 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme$subsystem", 00:18:09.927 "trtype": "$TEST_TRANSPORT", 00:18:09.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "$NVMF_PORT", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.927 "hdgst": ${hdgst:-false}, 00:18:09.927 "ddgst": ${ddgst:-false} 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 } 00:18:09.927 EOF 00:18:09.927 )") 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme1", 00:18:09.927 "trtype": "tcp", 00:18:09.927 "traddr": "10.0.0.2", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "4420", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.927 "hdgst": false, 00:18:09.927 "ddgst": false 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 }' 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme1", 00:18:09.927 "trtype": "tcp", 00:18:09.927 "traddr": "10.0.0.2", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "4420", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.927 "hdgst": false, 00:18:09.927 "ddgst": false 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 }' 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme1", 00:18:09.927 "trtype": "tcp", 00:18:09.927 "traddr": "10.0.0.2", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "4420", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.927 "hdgst": false, 00:18:09.927 "ddgst": false 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 }' 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
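Each printf/jq pair above emits one bdev_nvme_attach_controller entry for the JSON handed to a bdevperf instance. The trace only shows the entry itself; presumably gen_nvmf_target_json wraps it in the standard SPDK subsystem envelope (an assumption, not visible here), so each instance would consume something like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }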
00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.927 "params": { 00:18:09.927 "name": "Nvme1", 00:18:09.927 "trtype": "tcp", 00:18:09.927 "traddr": "10.0.0.2", 00:18:09.927 "adrfam": "ipv4", 00:18:09.927 "trsvcid": "4420", 00:18:09.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.927 "hdgst": false, 00:18:09.927 "ddgst": false 00:18:09.927 }, 00:18:09.927 "method": "bdev_nvme_attach_controller" 00:18:09.927 }' 00:18:09.927 00:37:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 82078 00:18:09.927 [2024-07-12 00:37:14.791426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:09.927 [2024-07-12 00:37:14.791683] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:09.927 [2024-07-12 00:37:14.815749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:09.927 [2024-07-12 00:37:14.816092] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:09.927 [2024-07-12 00:37:14.839739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:09.927 [2024-07-12 00:37:14.840072] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:10.185 [2024-07-12 00:37:14.862249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:10.185 [2024-07-12 00:37:14.862588] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:10.185 [2024-07-12 00:37:15.073615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.443 [2024-07-12 00:37:15.153042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.443 [2024-07-12 00:37:15.241517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.701 [2024-07-12 00:37:15.396539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:10.701 [2024-07-12 00:37:15.444069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:10.701 [2024-07-12 00:37:15.450542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.701 [2024-07-12 00:37:15.503012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:10.959 [2024-07-12 00:37:15.823296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:10.959 Running I/O for 1 seconds... 00:18:11.217 Running I/O for 1 seconds... 00:18:11.217 Running I/O for 1 seconds... 00:18:11.475 Running I/O for 1 seconds... 
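All four jobs run concurrently for one second and then print the per-device tables below. Flush completes roughly an order of magnitude more operations than the data-moving workloads, presumably because a flush against a RAM-backed Malloc bdev has essentially no work to do. If the raw bdevperf stdout is captured (without the CI timestamp prefixes seen here), the IOPS column can be pulled out with a short filter; a sketch, with the file name bdevperf.log assumed:

    # Device rows look like "Nvme1n1 : 1.00 146520.35 572.35 ..."; field 4 is IOPS.
    # Header, Job: and Total rows do not match the pattern and are skipped.
    awk '$1 ~ /^Nvme[0-9]+n[0-9]+$/ && $2 == ":" { print $1, $4 }' bdevperf.log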
00:18:12.041
00:18:12.041 Latency(us)
00:18:12.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.041 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:18:12.041 Nvme1n1 : 1.00 146520.35 572.35 0.00 0.00 870.44 355.61 2308.65
00:18:12.041 ===================================================================================================================
00:18:12.041 Total : 146520.35 572.35 0.00 0.00 870.44 355.61 2308.65
00:18:12.041
00:18:12.041 Latency(us)
00:18:12.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.041 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:18:12.041 Nvme1n1 : 1.01 7907.94 30.89 0.00 0.00 16114.98 3768.32 26333.56
00:18:12.041 ===================================================================================================================
00:18:12.041 Total : 7907.94 30.89 0.00 0.00 16114.98 3768.32 26333.56
00:18:12.298
00:18:12.298 Latency(us)
00:18:12.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.298 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:18:12.298 Nvme1n1 : 1.01 5514.44 21.54 0.00 0.00 23047.74 8698.41 45517.73
00:18:12.298 ===================================================================================================================
00:18:12.298 Total : 5514.44 21.54 0.00 0.00 23047.74 8698.41 45517.73
00:18:12.556
00:18:12.556 Latency(us)
00:18:12.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.556 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:18:12.556 Nvme1n1 : 1.01 5551.07 21.68 0.00 0.00 22879.61 9889.98 33363.78
00:18:12.556 ===================================================================================================================
00:18:12.556 Total : 5551.07 21.68 0.00 0.00 22879.61 9889.98 33363.78
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 82080
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 82083
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 82087
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:13.930 rmmod nvme_tcp
00:18:13.930 rmmod nvme_fabrics
00:18:13.930 rmmod nvme_keyring
00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 82022 ']' 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 82022 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 82022 ']' 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 82022 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82022 00:18:13.930 killing process with pid 82022 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82022' 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 82022 00:18:13.930 00:37:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 82022 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.303 00:37:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.303 00:37:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:15.303 00:18:15.303 real 0m7.667s 00:18:15.303 user 0m35.666s 00:18:15.303 sys 0m3.561s 00:18:15.303 00:37:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.303 ************************************ 00:18:15.303 END TEST nvmf_bdev_io_wait 00:18:15.303 00:37:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:15.303 ************************************ 00:18:15.303 00:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:15.303 00:37:20 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:15.303 00:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:15.303 00:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.303 00:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.303 ************************************ 00:18:15.303 START TEST nvmf_queue_depth 00:18:15.303 ************************************ 00:18:15.303 00:37:20 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:15.303 * Looking for test storage... 00:18:15.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.303 00:37:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:15.304 Cannot find device "nvmf_tgt_br" 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.304 Cannot find device "nvmf_tgt_br2" 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:15.304 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:15.562 Cannot find device "nvmf_tgt_br" 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:15.562 Cannot find device "nvmf_tgt_br2" 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:15.562 00:37:20 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.562 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:18:15.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:15.821 00:18:15.821 --- 10.0.0.2 ping statistics --- 00:18:15.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.821 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:15.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:15.821 00:18:15.821 --- 10.0.0.3 ping statistics --- 00:18:15.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.821 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:15.821 00:18:15.821 --- 10.0.0.1 ping statistics --- 00:18:15.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.821 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=82350 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 82350 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 82350 ']' 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
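nvmfappstart -m 0x2 comes down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch with the binary path and flags as traced above; the polling loop is an assumption standing in for waitforlisten, whose real implementation lives in autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done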
00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.821 00:37:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:15.821 [2024-07-12 00:37:20.653615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:15.821 [2024-07-12 00:37:20.653763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.079 [2024-07-12 00:37:20.821111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.338 [2024-07-12 00:37:21.062384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.338 [2024-07-12 00:37:21.062509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.338 [2024-07-12 00:37:21.062540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.338 [2024-07-12 00:37:21.062562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.338 [2024-07-12 00:37:21.062580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.338 [2024-07-12 00:37:21.062639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 [2024-07-12 00:37:21.581257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 Malloc0 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
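With the target up, queue_depth.sh provisions it over RPC: a TCP transport, a RAM-backed Malloc0 bdev (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from the header above), a subsystem, and, in the lines that follow, its namespace and the 10.0.0.2:4420 listener. Consolidated, the sequence is roughly this (rpc_cmd wraps scripts/rpc.py, which defaults to /var/tmp/spdk.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420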
00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 [2024-07-12 00:37:21.690951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=82397 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 82397 /var/tmp/bdevperf.sock 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 82397 ']' 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.934 00:37:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:16.934 [2024-07-12 00:37:21.832610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
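Unlike the fire-and-forget runs in bdev_io_wait, this bdevperf starts with -z, so it idles on its own RPC socket (-r /var/tmp/bdevperf.sock) until the controller is attached and the run is triggered externally. The driver pattern, condensed from the trace with all paths and arguments as shown (-q 1024 is the queue depth under test):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # Attach the exported namespace as bdev NVMe0n1, then kick off the 10 s verify run.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests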
00:18:16.935 [2024-07-12 00:37:21.832810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82397 ]
00:18:17.192 [2024-07-12 00:37:22.005910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.450 [2024-07-12 00:37:22.297155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:18:18.016 NVMe0n1
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:18.016 00:37:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:18.016 Running I/O for 10 seconds...
00:18:30.268
00:18:30.268 Latency(us)
00:18:30.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:30.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:30.268 Verification LBA range: start 0x0 length 0x4000
00:18:30.268 NVMe0n1 : 10.09 6503.94 25.41 0.00 0.00 156603.81 16324.42 127735.62
00:18:30.268 ===================================================================================================================
00:18:30.268 Total : 6503.94 25.41 0.00 0.00 156603.81 16324.42 127735.62
00:18:30.268 0
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 82397
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 82397 ']'
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 82397
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82397
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82397'
00:18:30.268 killing process with pid 82397
00:18:30.268 Received shutdown signal, test time was about 10.000000 seconds
00:18:30.268
00:18:30.268 Latency(us)
00:18:30.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:30.268 ===================================================================================================================
00:18:30.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:30.268 00:37:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 82397
00:18:30.268 00:37:33
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 82397 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.268 rmmod nvme_tcp 00:18:30.268 rmmod nvme_fabrics 00:18:30.268 rmmod nvme_keyring 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 82350 ']' 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 82350 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 82350 ']' 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 82350 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82350 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82350' 00:18:30.268 killing process with pid 82350 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 82350 00:18:30.268 00:37:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 82350 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.834 00:37:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.091 00:37:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.091 00:18:31.091 real 0m15.728s 00:18:31.091 user 0m26.520s 00:18:31.091 sys 0m2.123s 00:18:31.091 00:37:35 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.091 00:37:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:31.091 ************************************ 00:18:31.091 END TEST nvmf_queue_depth 00:18:31.091 ************************************ 00:18:31.091 00:37:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.091 00:37:35 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:31.091 00:37:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.091 00:37:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.091 00:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.091 ************************************ 00:18:31.091 START TEST nvmf_target_multipath 00:18:31.091 ************************************ 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:31.091 * Looking for test storage... 00:18:31.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.091 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.092 00:37:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.092 00:37:35 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:18:31.092 Cannot find device "nvmf_tgt_br"
00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true
00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:18:31.092 Cannot find device "nvmf_tgt_br2"
00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true
00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:18:31.092 Cannot find device "nvmf_tgt_br"
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:18:31.350 Cannot find device "nvmf_tgt_br2"
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:31.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:31.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:31.350 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:18:31.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:31.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:18:31.608
00:18:31.608 --- 10.0.0.2 ping statistics ---
00:18:31.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:31.608 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:18:31.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:31.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms
00:18:31.608
00:18:31.608 --- 10.0.0.3 ping statistics ---
00:18:31.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:31.608 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:31.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:31.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:18:31.608
00:18:31.608 --- 10.0.0.1 ping statistics ---
00:18:31.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:31.608 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=82754
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 82754
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 82754 ']'
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:31.608 00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:31.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:18:31.608 [2024-07-12 00:37:36.439839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
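
For reference, the nvmf_veth_init sequence traced above amounts to the following standalone sketch (assumptions: a root shell with iproute2 and iptables; interface names and the 10.0.0.0/24 addressing are the ones the test itself uses; the second target interface, nvmf_tgt_if2 at 10.0.0.3, is built the same way and is omitted here for brevity):

# target side lives in a private network namespace, initiator side in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the *_br peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2    # initiator-to-target reachability, exactly what the trace verifies
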
00:18:31.608 [2024-07-12 00:37:36.440012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:31.865 [2024-07-12 00:37:36.614817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:32.123 [2024-07-12 00:37:36.893886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:32.123 [2024-07-12 00:37:36.894026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:32.123 [2024-07-12 00:37:36.894045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:32.123 [2024-07-12 00:37:36.894061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:32.123 [2024-07-12 00:37:36.894074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:32.123 [2024-07-12 00:37:36.894315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:32.123 [2024-07-12 00:37:36.894412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:32.123 [2024-07-12 00:37:36.895161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.123 [2024-07-12 00:37:36.895166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:32.689 00:37:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:32.947 [2024-07-12 00:37:37.678210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:32.947 00:37:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:18:33.204 Malloc0
00:18:33.204 00:37:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
00:18:33.461 00:37:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:33.719 00:37:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:33.978 [2024-07-12 00:37:38.832615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:34.235 00:37:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:34.235 [2024-07-12 00:37:39.072873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:34.235 00:37:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
00:18:34.493 00:37:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
00:18:34.752 00:37:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME
00:18:34.752 00:37:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0
00:18:34.752 00:37:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:34.752 00:37:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:18:34.752 00:37:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 ))
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1
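
Taken together, the target provisioning and host attach just traced reduce to this sketch (the rpc.py and nvme-cli invocations are copied from the trace; $rpc is shorthand introduced here, and NVME_HOST is the hostnqn/hostid pair that nvmf/common.sh defines; -r on nvmf_create_subsystem requests ANA reporting, which the set_ana_state calls below rely on):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, -u caps in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # expose Malloc0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# one connect per listener; native NVMe multipath merges the two controllers into a
# single nvme0n1 with per-path nodes nvme0c0n1 and nvme0c1n1, as get_subsystem finds above
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
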
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1
00:18:36.675 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=82897
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:18:36.676 00:37:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:18:36.676 [global]
00:18:36.676 thread=1
00:18:36.676 invalidate=1
00:18:36.676 rw=randrw
00:18:36.676 time_based=1
00:18:36.676 runtime=6
00:18:36.676 ioengine=libaio
00:18:36.676 direct=1
00:18:36.676 bs=4096
00:18:36.676 iodepth=128
00:18:36.676 norandommap=0
00:18:36.676 numjobs=1
00:18:36.676
00:18:36.676 verify_dump=1
00:18:36.676 verify_backlog=512
00:18:36.676 verify_state_save=0
00:18:36.676 do_verify=1
00:18:36.676 verify=crc32c-intel
00:18:36.676 [job0]
00:18:36.676 filename=/dev/nvme0n1
00:18:36.676 Could not set queue depth (nvme0n1)
00:18:36.934 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:18:36.934 fio-3.35
00:18:36.934 Starting 1 thread
00:18:37.867 00:37:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:18:38.126 00:37:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:18:38.384 00:37:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:18:39.315 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:18:39.315 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:18:39.315 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:18:39.315 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:18:39.573 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
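
The check_ana_state helper being traced through here polls sysfs until the given path reports the expected ANA state; roughly, paraphrasing the @18/@22/@23/@25/@26 script lines above rather than quoting target/multipath.sh verbatim:

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # wait up to ~20s for the per-path ana_state file to appear and match
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1s
        (( timeout-- == 0 )) && return 1
    done
}
# e.g., after flipping the 10.0.0.2 listener: check_ana_state nvme0c0n1 inaccessible
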
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:18:40.141 00:37:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:18:41.074 00:37:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:18:41.074 00:37:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:18:41.074 00:37:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:18:41.074 00:37:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 82897
00:18:42.976
00:18:42.976 job0: (groupid=0, jobs=1): err= 0: pid=82919: Fri Jul 12 00:37:47 2024
00:18:42.976 read: IOPS=8240, BW=32.2MiB/s (33.8MB/s)(193MiB/6004msec)
00:18:42.976 slat (usec): min=2, max=10183, avg=72.14, stdev=342.57
00:18:42.976 clat (usec): min=2983, max=21482, avg=10708.73, stdev=1746.87
00:18:42.976 lat (usec): min=3669, max=21494, avg=10780.87, stdev=1762.30
00:18:42.976 clat percentiles (usec):
00:18:42.976 | 1.00th=[ 6259], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[ 9503],
00:18:42.976 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10945],
00:18:42.976 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12780], 95.00th=[13698],
00:18:42.976 | 99.00th=[16057], 99.50th=[16712], 99.90th=[17957], 99.95th=[18482],
00:18:42.976 | 99.99th=[20055]
00:18:42.976 bw ( KiB/s): min= 5176, max=20400, per=53.03%, avg=17480.64, stdev=4309.08, samples=11
00:18:42.976 iops : min= 1294, max= 5100, avg=4370.09, stdev=1077.26, samples=11
00:18:42.976 write: IOPS=4673, BW=18.3MiB/s (19.1MB/s)(97.0MiB/5311msec); 0 zone resets
00:18:42.976 slat (usec): min=4, max=2784, avg=86.21, stdev=234.65
00:18:42.976 clat (usec): min=939, max=19792, avg=9341.59, stdev=1395.09
00:18:42.976 lat (usec): min=1492, max=19829, avg=9427.80, stdev=1400.74
00:18:42.976 clat percentiles (usec):
00:18:42.976 | 1.00th=[ 5145], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[ 8455],
00:18:42.976 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634],
00:18:42.976 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10683], 95.00th=[11207],
00:18:42.976 | 99.00th=[13435], 99.50th=[14615], 99.90th=[17171], 99.95th=[17695],
00:18:42.976 | 99.99th=[19792]
00:18:42.976 bw ( KiB/s): min= 5016, max=20480, per=93.34%, avg=17451.64, stdev=4306.25, samples=11
00:18:42.976 iops : min= 1254, max= 5120, avg=4363.00, stdev=1076.58, samples=11
00:18:42.976 lat (usec) : 1000=0.01%
00:18:42.976 lat (msec) : 2=0.01%, 4=0.08%, 10=48.69%, 20=51.22%, 50=0.01%
00:18:42.976 cpu : usr=4.33%, sys=19.39%, ctx=4630, majf=0, minf=84
00:18:42.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:18:42.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:42.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:42.976 issued rwts: total=49475,24823,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:42.976 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:42.976
00:18:42.976 Run status group 0 (all jobs):
00:18:42.976 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=193MiB (203MB), run=6004-6004msec
00:18:42.976 WRITE: bw=18.3MiB/s (19.1MB/s), 18.3MiB/s-18.3MiB/s (19.1MB/s-19.1MB/s), io=97.0MiB (102MB), run=5311-5311msec
00:18:42.976
00:18:42.976 Disk stats (read/write):
00:18:42.976 nvme0n1: ios=48190/24823,
merge=0/0, ticks=489089/218246, in_queue=707335, util=98.63% 00:18:42.976 00:37:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:43.234 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:18:43.800 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:43.801 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:43.801 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:43.801 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:43.801 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:18:43.801 00:37:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:44.734 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=83047 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:44.735 00:37:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:18:44.735 [global] 00:18:44.735 thread=1 00:18:44.735 invalidate=1 00:18:44.735 rw=randrw 00:18:44.735 time_based=1 00:18:44.735 runtime=6 00:18:44.735 ioengine=libaio 00:18:44.735 direct=1 00:18:44.735 bs=4096 00:18:44.735 iodepth=128 00:18:44.735 norandommap=0 00:18:44.735 numjobs=1 00:18:44.735 00:18:44.735 verify_dump=1 00:18:44.735 verify_backlog=512 00:18:44.735 verify_state_save=0 00:18:44.735 do_verify=1 00:18:44.735 verify=crc32c-intel 00:18:44.735 [job0] 00:18:44.735 filename=/dev/nvme0n1 00:18:44.735 Could not set queue depth (nvme0n1) 00:18:44.735 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:44.735 fio-3.35 00:18:44.735 Starting 1 thread 00:18:45.669 00:37:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:45.928 00:37:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:46.187 00:37:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:47.560 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:47.560 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:47.560 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:47.560 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:47.560 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:47.818 00:37:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:48.751 00:37:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:48.751 00:37:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:48.751 00:37:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:48.751 00:37:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 83047 00:18:51.278 00:18:51.278 job0: (groupid=0, jobs=1): err= 0: pid=83068: Fri Jul 12 00:37:55 2024 00:18:51.278 read: IOPS=9401, BW=36.7MiB/s (38.5MB/s)(221MiB/6007msec) 00:18:51.278 slat (usec): min=4, max=8940, avg=55.56, stdev=283.07 00:18:51.278 clat (usec): min=310, max=18146, avg=9434.29, stdev=2140.98 00:18:51.278 lat (usec): min=424, max=18160, avg=9489.85, stdev=2168.94 00:18:51.278 clat percentiles (usec): 00:18:51.278 | 1.00th=[ 4293], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7308], 00:18:51.278 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:18:51.278 | 70.00th=[10421], 80.00th=[11076], 90.00th=[11731], 95.00th=[12387], 00:18:51.278 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16581], 99.95th=[17171], 00:18:51.278 | 99.99th=[17957] 00:18:51.278 bw ( KiB/s): min= 592, max=37232, per=51.40%, avg=19331.33, stdev=9403.51, samples=12 00:18:51.278 iops : min= 148, max= 9308, avg=4832.83, stdev=2350.88, samples=12 00:18:51.278 write: IOPS=5757, BW=22.5MiB/s (23.6MB/s)(114MiB/5072msec); 0 zone resets 00:18:51.278 slat (usec): min=15, max=6069, avg=65.81, stdev=190.53 00:18:51.278 clat (usec): min=306, max=17136, avg=7841.28, stdev=2227.74 00:18:51.278 lat (usec): min=354, max=17160, avg=7907.10, stdev=2248.48 00:18:51.278 clat percentiles (usec): 00:18:51.278 | 1.00th=[ 2802], 5.00th=[ 4146], 10.00th=[ 4621], 20.00th=[ 5342], 00:18:51.278 | 30.00th=[ 6325], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 8979], 00:18:51.278 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:18:51.278 | 99.00th=[12911], 99.50th=[13698], 99.90th=[15664], 99.95th=[15926], 00:18:51.278 | 99.99th=[16581] 00:18:51.278 bw ( KiB/s): min= 704, max=36864, per=84.40%, avg=19438.00, stdev=9363.61, samples=12 00:18:51.278 iops : min= 176, max= 9216, avg=4859.50, stdev=2340.90, samples=12 00:18:51.278 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:18:51.278 lat (msec) : 2=0.19%, 4=1.75%, 10=69.36%, 20=28.68% 00:18:51.278 cpu : usr=4.68%, sys=21.84%, ctx=5653, majf=0, minf=121 00:18:51.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:51.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.278 issued rwts: total=56477,29202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.278 00:18:51.278 Run status group 0 (all jobs): 00:18:51.278 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=221MiB (231MB), run=6007-6007msec 00:18:51.278 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=114MiB (120MB), run=5072-5072msec 00:18:51.278 00:18:51.278 Disk stats (read/write): 00:18:51.278 nvme0n1: ios=55687/28749, merge=0/0, ticks=493219/207779, in_queue=700998, util=98.60% 00:18:51.278 00:37:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:51.278 00:37:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:18:51.279 00:37:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.537 rmmod nvme_tcp 00:18:51.537 rmmod nvme_fabrics 00:18:51.537 rmmod nvme_keyring 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 82754 ']' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 82754 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 82754 ']' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 82754 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82754 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:51.537 killing process with pid 82754 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82754' 00:18:51.537 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 82754 00:18:51.538 00:37:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 82754 00:18:52.911 
00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:52.911 00:18:52.911 real 0m21.924s 00:18:52.911 user 1m24.007s 00:18:52.911 sys 0m6.103s 00:18:52.911 00:37:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.911 ************************************ 00:18:52.911 END TEST nvmf_target_multipath 00:18:52.912 00:37:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:52.912 ************************************ 00:18:52.912 00:37:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:52.912 00:37:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:52.912 00:37:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.912 00:37:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.912 00:37:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.912 ************************************ 00:18:52.912 START TEST nvmf_zcopy 00:18:52.912 ************************************ 00:18:52.912 00:37:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:53.170 * Looking for test storage... 
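
Before nvmf_zcopy rebuilds its environment, note the teardown pattern that closed nvmf_target_multipath just above; condensed as a sketch (the trap reset, modprobe retry loop, killprocess and address flush are taken from the trace; the netns removal inside _remove_spdk_ns is not shown in this log and is an assumption here):

trap - SIGINT SIGTERM EXIT                    # drop the cleanup trap installed at test start
sync
set +e                                        # module unload may need several attempts
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"            # killprocess: stop and reap the nvmf_tgt reactor
ip netns delete nvmf_tgt_ns_spdk              # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if                 # drop the initiator-side address
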
00:18:53.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:53.170 Cannot find device "nvmf_tgt_br" 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:53.170 Cannot find device "nvmf_tgt_br2" 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:53.170 Cannot find device "nvmf_tgt_br" 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:53.170 Cannot find device "nvmf_tgt_br2" 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:18:53.170 00:37:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:53.170 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:53.171 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:53.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:18:53.429 00:18:53.429 --- 10.0.0.2 ping statistics --- 00:18:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.429 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:53.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:53.429 00:18:53.429 --- 10.0.0.3 ping statistics --- 00:18:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.429 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:18:53.429 00:18:53.429 --- 10.0.0.1 ping statistics --- 00:18:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.429 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.429 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=83353 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 83353 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 83353 ']' 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.430 00:37:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.688 [2024-07-12 00:37:58.417442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:53.688 [2024-07-12 00:37:58.417610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.688 [2024-07-12 00:37:58.586590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.954 [2024-07-12 00:37:58.836655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.954 [2024-07-12 00:37:58.836776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:53.954 [2024-07-12 00:37:58.836795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.954 [2024-07-12 00:37:58.836811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.954 [2024-07-12 00:37:58.836823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.954 [2024-07-12 00:37:58.836868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.520 [2024-07-12 00:37:59.427874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.520 [2024-07-12 00:37:59.444024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.520 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.779 malloc0 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.779 
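The rpc_cmd calls traced above provision the target side of the zcopy test: a TCP transport created with zero-copy enabled and in-capsule data disabled (-c 0), subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks. A minimal standalone sketch of the same sequence follows; it assumes scripts/rpc.py as the JSON-RPC client talking to the /var/tmp/spdk.sock socket named in the log (the harness's rpc_cmd wrapper may route through a different client):

    # Hypothetical convenience variable; paths mirror the repo layout in this log.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport: -c 0 sets in-capsule data size to 0, --zcopy enables zero-copy
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem: -a allows any host, -s sets the serial, -m caps namespaces at 10
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Data and discovery listeners on the veth address inside nvmf_tgt_ns_spdk
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Backing bdev, then expose it as namespace 1 (the add_ns call is traced next)
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1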
00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:54.779 { 00:18:54.779 "params": { 00:18:54.779 "name": "Nvme$subsystem", 00:18:54.779 "trtype": "$TEST_TRANSPORT", 00:18:54.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.779 "adrfam": "ipv4", 00:18:54.779 "trsvcid": "$NVMF_PORT", 00:18:54.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.779 "hdgst": ${hdgst:-false}, 00:18:54.779 "ddgst": ${ddgst:-false} 00:18:54.779 }, 00:18:54.779 "method": "bdev_nvme_attach_controller" 00:18:54.779 } 00:18:54.779 EOF 00:18:54.779 )") 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:54.779 00:37:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:54.779 "params": { 00:18:54.779 "name": "Nvme1", 00:18:54.779 "trtype": "tcp", 00:18:54.779 "traddr": "10.0.0.2", 00:18:54.779 "adrfam": "ipv4", 00:18:54.779 "trsvcid": "4420", 00:18:54.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.779 "hdgst": false, 00:18:54.779 "ddgst": false 00:18:54.779 }, 00:18:54.779 "method": "bdev_nvme_attach_controller" 00:18:54.779 }' 00:18:54.779 [2024-07-12 00:37:59.610742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:54.779 [2024-07-12 00:37:59.610904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83404 ] 00:18:55.038 [2024-07-12 00:37:59.783852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.296 [2024-07-12 00:38:00.072563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.554 Running I/O for 10 seconds... 
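gen_nvmf_target_json expands the heredoc above into the bdev_nvme_attach_controller entry shown by printf, and bdevperf reads it as its entire configuration over a pipe (--json /dev/fd/62), so no initiator-side RPC is needed. Below is a sketch of an equivalent standalone run against the same listener; the surrounding "subsystems"/"bdev" wrapper is assumed from bdevperf's usual JSON-config shape (it is not visible in the trace), and the file path is illustrative:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 10 s verify workload, queue depth 128, 8192-byte I/O, matching the run below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192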
00:19:05.604
00:19:05.604 Latency(us)
00:19:05.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:05.604 Verification LBA range: start 0x0 length 0x1000
00:19:05.604 Nvme1n1 : 10.02 4483.45 35.03 0.00 0.00 28468.09 4498.15 35508.60
00:19:05.604 ===================================================================================================================
00:19:05.604 Total : 4483.45 35.03 0.00 0.00 28468.09 4498.15 35508.60
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=83533
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:06.981 {
00:19:06.981 "params": {
00:19:06.981 "name": "Nvme$subsystem",
00:19:06.981 "trtype": "$TEST_TRANSPORT",
00:19:06.981 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:06.981 "adrfam": "ipv4",
00:19:06.981 "trsvcid": "$NVMF_PORT",
00:19:06.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:06.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:06.981 "hdgst": ${hdgst:-false},
00:19:06.981 "ddgst": ${ddgst:-false}
00:19:06.981 },
00:19:06.981 "method": "bdev_nvme_attach_controller"
00:19:06.981 }
00:19:06.981 EOF
00:19:06.981 )")
[2024-07-12 00:38:11.715230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-12 00:38:11.715292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
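From this point the log interleaves three streams: the second bdevperf run (5 s randrw at a 50% read mix), the xtrace of its JSON config generation, and the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while I/O is in flight. Because malloc0 already occupies NSID 1 on cnode1, the target rejects every attempt with the paired subsystem.c/nvmf_rpc.c errors, and the client-side entries (Go-style map[...] formatting) report JSON-RPC Code=-32602 Msg=Invalid parameters; only the timestamps differ between iterations. One iteration, sketched with the same hypothetical $rpc wrapper as above:

    # Expected to fail while malloc0 still occupies NSID 1: the target logs
    # "Requested NSID 1 already in use" and the client sees Code=-32602.
    if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "add_ns rejected as expected (Invalid parameters)"
    fi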
00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:06.981 00:38:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:06.981 "params": { 00:19:06.981 "name": "Nvme1", 00:19:06.981 "trtype": "tcp", 00:19:06.981 "traddr": "10.0.0.2", 00:19:06.981 "adrfam": "ipv4", 00:19:06.981 "trsvcid": "4420", 00:19:06.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.981 "hdgst": false, 00:19:06.981 "ddgst": false 00:19:06.981 }, 00:19:06.981 "method": "bdev_nvme_attach_controller" 00:19:06.981 }' 00:19:06.981 [2024-07-12 00:38:11.727225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.727276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.735159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.735200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.743184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.743225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.751184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.752019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.759188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.759352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.767192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.767351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.775180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.775341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.783174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.783327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.791224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.791377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.799180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.799333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 [2024-07-12 00:38:11.803054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:06.981 [2024-07-12 00:38:11.803321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83533 ] 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.811210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.811364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.819217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.819389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.827189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.827342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.835206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.835251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.843211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.843252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.851197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.851238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.859224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.859264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.867222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.867382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.875224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.875367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.981 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.981 [2024-07-12 00:38:11.887267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.981 [2024-07-12 00:38:11.887426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.982 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.982 [2024-07-12 00:38:11.895246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.982 [2024-07-12 00:38:11.895409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.982 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:06.982 [2024-07-12 00:38:11.903227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.982 [2024-07-12 00:38:11.903378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.982 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:07.240 [2024-07-12 00:38:11.915251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.915411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.927236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.927277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.939301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.939352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.951260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.951307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.959259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.959437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.967264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.967426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 [2024-07-12 00:38:11.968264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.240 [2024-07-12 00:38:11.975248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.240 [2024-07-12 00:38:11.975417] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.240 2024/07/12 00:38:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.195374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.195549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.203346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.203515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.209497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.499 [2024-07-12 00:38:12.211410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.211561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.219360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.219533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.231471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.231671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.239417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.239462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.247373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.499 [2024-07-12 00:38:12.247433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.499 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.499 [2024-07-12 00:38:12.383432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.500 [2024-07-12 00:38:12.383584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.500 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:19:07.500 [2024-07-12 00:38:12.395462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.500 [2024-07-12 00:38:12.395612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.500 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.500 [2024-07-12 00:38:12.407461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.500 [2024-07-12 00:38:12.407612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.500 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.500 [2024-07-12 00:38:12.419444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.500 [2024-07-12 00:38:12.419593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.500 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.500 [2024-07-12 00:38:12.431530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.500 [2024-07-12 00:38:12.431745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.443540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.443727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.451443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.451590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.459486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.459635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.467447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.467607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.475466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.475615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.483490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.483639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.491454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.491493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.499475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.499514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.507475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.758 [2024-07-12 00:38:12.507514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:07.758 [2024-07-12 00:38:12.515469] 
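Note on the params dumps above: the token no_auto_visible:%!s(bool=false) is not part of the RPC payload. It is Go's fmt package flagging a %s verb applied to a bool value, which suggests the Go-based test client formats the params map with %s before logging it (an inference from the log's rendering, not something SPDK emits). A minimal sketch reproducing the artifact:

package main

import "fmt"

func main() {
	// Formatting a bool with the string verb %s makes fmt emit its
	// bad-verb marker instead of "false".
	fmt.Printf("no_auto_visible:%s\n", false)
	// Output: no_auto_visible:%!s(bool=false)
}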
00:19:07.758 [2024-07-12 00:38:12.515469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:07.758 [2024-07-12 00:38:12.515627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:07.758 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same rejection recurs at 00:38:12.523, .531, .539, .547, .555, .563, .575, .583, .591, .599 and .607 ...]
00:19:07.759 Running I/O for 5 seconds...
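Each rejected call above is a single JSON-RPC 2.0 request to SPDK's RPC socket. Below is a minimal sketch of the request the harness is presumably issuing; only the method name and params are taken from the log, while the socket path /var/tmp/spdk.sock is SPDK's conventional default and an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"net"
)

func main() {
	// SPDK serves JSON-RPC on a Unix-domain socket (path assumed).
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Mirrors the failing call: attach bdev malloc0 as NSID 1 of
	// subsystem nqn.2016-06.io.spdk:cnode1.
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "nvmf_subsystem_add_ns",
		"params": map[string]interface{}{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]interface{}{
				"bdev_name": "malloc0",
				"nsid":      1,
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		panic(err)
	}

	// While NSID 1 is already taken, the response carries the error
	// object the log keeps printing: code -32602, "Invalid parameters".
	var resp map[string]interface{}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		panic(err)
	}
	fmt.Println(resp["error"])
}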
00:19:07.759 [2024-07-12 00:38:12.615764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:07.759 [2024-07-12 00:38:12.615924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:07.759 2024/07/12 00:38:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... with the I/O run in progress the rejection cadence slows to 12-20 ms, repeating unchanged from 00:38:12.632 through 00:38:13.122 (elapsed 00:19:07.759 through 00:19:08.276) ...]
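For reference, Code=-32602 in every rejection is the JSON-RPC 2.0 reserved "Invalid params" error code; the server maps the NSID collision onto it rather than a dedicated code, which is why the client-side message stays the generic "Invalid parameters" while the actual cause appears only in the server's *ERROR* lines. A small sketch of decoding that error object in Go (the struct name is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// rpcError matches the JSON-RPC 2.0 error object returned above.
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

func main() {
	raw := []byte(`{"code":-32602,"message":"Invalid parameters"}`)
	var e rpcError
	if err := json.Unmarshal(raw, &e); err != nil {
		panic(err)
	}
	// Prints in the same shape the harness logs:
	// Code=-32602 Msg=Invalid parameters
	fmt.Printf("Code=%d Msg=%s\n", e.Code, e.Message)
}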
[... the loop continues unchanged, rejecting NSID 1 every 12-20 ms from 00:38:13.139 through 00:38:14.055 (elapsed 00:19:08.276 through 00:19:09.386) ...]
00:19:09.386 [2024-07-12 00:38:14.069370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:09.386 [2024-07-12 00:38:14.069561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.088054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.088232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.102211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.102378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.117790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.117844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.137044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.137104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.154148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.154210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.166763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.166950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.185668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.185849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.203799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.203979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.216972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.217153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.232360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.232547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.249516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.249685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.267298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.267481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.284661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.284833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.300835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.301006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.386 [2024-07-12 00:38:14.313454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.386 [2024-07-12 00:38:14.313555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.386 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.326256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.326449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.340338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.340540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.354805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.354970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.371594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.371762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.388812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:09.646 [2024-07-12 00:38:14.388990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.401250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.401428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.414135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.414185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.431304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.431353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.447404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.447487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.465142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.465362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.478768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.478937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.496099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.496271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.513722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.513908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.646 [2024-07-12 00:38:14.527517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.646 [2024-07-12 00:38:14.527697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.646 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.647 [2024-07-12 00:38:14.542289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.647 [2024-07-12 00:38:14.542491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.647 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.647 [2024-07-12 00:38:14.559394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.647 [2024-07-12 00:38:14.559586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.647 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.647 [2024-07-12 00:38:14.576975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.647 [2024-07-12 00:38:14.577141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.905 [2024-07-12 00:38:14.589583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.905 [2024-07-12 00:38:14.589746] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.905 [2024-07-12 00:38:14.607735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.905 [2024-07-12 00:38:14.607899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.905 [2024-07-12 00:38:14.621228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.905 [2024-07-12 00:38:14.621421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.905 [2024-07-12 00:38:14.638793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.905 [2024-07-12 00:38:14.638964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.905 [2024-07-12 00:38:14.656375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.905 [2024-07-12 00:38:14.656436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.905 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.669602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.669650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.688250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.688305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.704559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.704606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.716207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.716381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.734026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.734203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.750645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.750877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.767315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.767540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.779578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.779743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.795847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.796014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.810262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.810476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:09.906 [2024-07-12 00:38:14.824480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.906 [2024-07-12 00:38:14.824687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.906 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.841789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.841850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.855909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.855956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.872551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.872595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.890241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.890425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:10.166 [2024-07-12 00:38:14.903723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.903931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.921691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.921856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.939368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.939584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.956670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.956837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.974160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.974352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:14.990478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:14.990652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.003756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.003919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.019376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.019441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.035899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.035945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.052082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.052130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.064087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.064259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.081165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.081340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.166 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.166 [2024-07-12 00:38:15.097714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.166 [2024-07-12 00:38:15.097885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.425 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.425 [2024-07-12 00:38:15.109858] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.425 [2024-07-12 00:38:15.110021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.425 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.425 [2024-07-12 00:38:15.127347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.127553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.144680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.144850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.157696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.157863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.175634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.175802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.192530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.192576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.205705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.205752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.222742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.222791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.240102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.240153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.253618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.253796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.271368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.271564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.288802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.289009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.304916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.305084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.317777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.317943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.335869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.336038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.426 [2024-07-12 00:38:15.352303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.426 [2024-07-12 00:38:15.352495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.426 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.685 [2024-07-12 00:38:15.364390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.685 [2024-07-12 00:38:15.364568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.685 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.685 [2024-07-12 00:38:15.382164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.685 [2024-07-12 00:38:15.382213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.685 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.685 [2024-07-12 00:38:15.398862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.685 [2024-07-12 00:38:15.398911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.685 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.685 [2024-07-12 00:38:15.412041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.685 [2024-07-12 00:38:15.412099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.685 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.685 [2024-07-12 00:38:15.429480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.685 [2024-07-12 00:38:15.429653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.446109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.446281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.458885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.459051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.476789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.476957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.493363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.493544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.509500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.509665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.527572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:10.686 [2024-07-12 00:38:15.527621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.543783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.543837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.556064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.556112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.568852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.568902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.582728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.582781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.599705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.599756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.686 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.686 [2024-07-12 00:38:15.616292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.686 [2024-07-12 00:38:15.616343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.944 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.944 [2024-07-12 00:38:15.633431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.944 [2024-07-12 00:38:15.633482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.649508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.649689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.666646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.666825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.684025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.684205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.700408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.700600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.718851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.719034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.736567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.736749] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.749368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.749543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.764571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.764620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.781607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.781655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.798991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.799058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.815295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.815344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.832498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.832544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.848231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.848281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:10.945 [2024-07-12 00:38:15.865560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.945 [2024-07-12 00:38:15.865605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.945 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.204 [2024-07-12 00:38:15.883186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.204 [2024-07-12 00:38:15.883234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.204 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.204 [2024-07-12 00:38:15.900468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.204 [2024-07-12 00:38:15.900515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.204 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.204 [2024-07-12 00:38:15.918185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.204 [2024-07-12 00:38:15.918233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.204 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:15.935071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:15.935120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:15.948209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:15.948257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:19:11.205 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:15.967023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:15.967073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:15.983696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:15.983744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.001513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.001559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.018758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.018804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.034549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.034595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.052311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.052359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:11.205 [2024-07-12 00:38:16.069133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.069180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.085909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.085956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.104147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.104197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.205 [2024-07-12 00:38:16.122931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.205 [2024-07-12 00:38:16.122980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.205 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.141699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.141762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.159503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.159552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.175180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.175228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.188175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.188225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.207008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.207056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.223590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.223639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.240629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.240675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.256701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.256748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.274029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.274077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.290921] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.290968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.307968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.308018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.325107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.325155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.341132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.341197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.358714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.358766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.375312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.375362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.464 [2024-07-12 00:38:16.392959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.464 [2024-07-12 00:38:16.393009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.464 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.410252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.410300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.427583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.427631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.444758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.444809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.457809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.457875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.475933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.475984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.492227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.492277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.504966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.505015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.522608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.522657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.539882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.539933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.557147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.557201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.723 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.723 [2024-07-12 00:38:16.573325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.723 [2024-07-12 00:38:16.573375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.724 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.724 [2024-07-12 00:38:16.589788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.724 [2024-07-12 00:38:16.589867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.724 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.724 [2024-07-12 00:38:16.606095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.724 [2024-07-12 00:38:16.606143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.724 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.724 [2024-07-12 00:38:16.624425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.724 [2024-07-12 00:38:16.624472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.724 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.724 [2024-07-12 00:38:16.642003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.724 [2024-07-12 00:38:16.642051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.724 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.659689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.659737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.676065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.676114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.692927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.692977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.705945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.705992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.724353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:11.982 [2024-07-12 00:38:16.724457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.741973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.742023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.757717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.757767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.773675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.773725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.790562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.790625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.806520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.806586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.823935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.824008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.839569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.839619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.857303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.857359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.874943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.875026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.888040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.888111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.982 [2024-07-12 00:38:16.906047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.982 [2024-07-12 00:38:16.906103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.982 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.241 [2024-07-12 00:38:16.922264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.241 [2024-07-12 00:38:16.922334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:16.939734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:16.939787] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:16.952788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:16.952838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:16.970727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:16.970779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:16.986930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:16.986979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:16.999306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:16.999387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.014703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.014771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.031890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.031942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.048360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.048450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.066517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.066582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.079516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.079565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.097823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.097875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.114473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.114552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.131975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.132027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.149711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.149762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.242 [2024-07-12 00:38:17.165923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.242 [2024-07-12 00:38:17.165974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.242 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.500 [2024-07-12 00:38:17.182947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.500 [2024-07-12 00:38:17.183000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.500 [2024-07-12 00:38:17.199701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.500 [2024-07-12 00:38:17.199754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.500 [2024-07-12 00:38:17.212771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.500 [2024-07-12 00:38:17.212822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.500 [2024-07-12 00:38:17.231127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.500 [2024-07-12 00:38:17.231181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:12.500 [2024-07-12 00:38:17.248945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:12.500 [2024-07-12 00:38:17.249014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:12.500 [2024-07-12 00:38:17.265123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:12.500 [2024-07-12 00:38:17.265205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:12.500 2024/07/12 00:38:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the three entries above repeat verbatim, only the timestamps advancing, from 00:38:17.277628 through 00:38:17.625975]
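Each collapsed failure above is one JSON-RPC round trip in which the target rejects a second mapping of NSID 1. For reference, a sketch of the equivalent call through SPDK's stock scripts/rpc.py (the suite issues it from a Go client instead; the rpc.py path and the request/reply shapes below are reconstructed from the repo layout, the logged params map, and the logged error code, so treat them as assumptions):

# One iteration of the failing loop, re-issued by hand (sketch; rpc.py path assumed).
# NSID 1 is already mapped on cnode1, so the target answers -32602.
sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Request on the wire, per the logged params map:
#   {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
#    "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
#              "namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}
# Reply, per the logged error:
#   {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}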
00:19:12.759 Latency(us)
00:19:12.759 Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:19:12.759 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:12.759 Nvme1n1                                                                :       5.01    8875.85      69.34       0.00      0.00   14400.19    5659.93   26214.40
00:19:12.759 ===================================================================================================================
00:19:12.759 Total                                                                  :            8875.85      69.34       0.00      0.00   14400.19    5659.93   26214.40
[the nvmf_subsystem_add_ns failure sequence resumes immediately and repeats, only the timestamps advancing, from 00:38:17.638035 through 00:38:18.830502]
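As a quick consistency check on the table: the MiB/s column is just IOPS times IO size, and plain bc reproduces it (nothing SPDK-specific is assumed here):

# 8875.85 IOPS x 8192-byte I/Os, expressed in MiB/s (1 MiB = 1048576 bytes)
echo 'scale=2; 8875.85 * 8192 / 1048576' | bc
# -> 69.34, matching the MiB/s column in the table above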
namespace 00:19:14.261 2024/07/12 00:38:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:14.261 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (83533) - No such process 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 83533 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.261 delay0 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.261 00:38:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:14.261 [2024-07-12 00:38:19.096627] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:20.839 Initializing NVMe Controllers 00:19:20.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:20.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:20.839 Initialization complete. Launching workers. 
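The sequence above is the core of the zcopy abort pass: NSID 1 is removed, malloc0 is wrapped in a delay bdev so that I/O stays in flight long enough to be aborted, the delayed bdev is re-exposed as NSID 1, and the abort example is pointed at it. A minimal replay of those steps with scripts/rpc.py (a sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 on the default RPC socket; every flag value is copied from the run above):

  cd /home/vagrant/spdk_repo/spdk
  # drop the current namespace so the backing bdev can be swapped
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev (latency arguments in microseconds)
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-expose the now-slow bdev as NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5 s of 50/50 randrw at queue depth 64 on core 0, aborting commands as it goes
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort statistics that follow are the output of that run.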
00:19:20.839 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 250 00:19:20.839 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 537, failed to submit 33 00:19:20.839 success 381, unsuccess 156, failed 0 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.839 rmmod nvme_tcp 00:19:20.839 rmmod nvme_fabrics 00:19:20.839 rmmod nvme_keyring 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 83353 ']' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 83353 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 83353 ']' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 83353 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83353 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.839 killing process with pid 83353 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83353' 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 83353 00:19:20.839 00:38:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 83353 00:19:22.215 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.215 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.215 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:22.216 00:19:22.216 real 0m28.989s 00:19:22.216 user 0m47.638s 00:19:22.216 sys 0m6.690s 00:19:22.216 ************************************ 00:19:22.216 END TEST 
nvmf_zcopy 00:19:22.216 ************************************ 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.216 00:38:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:22.216 00:38:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:22.216 00:38:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.216 00:38:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.216 00:38:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.216 00:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:22.216 ************************************ 00:19:22.216 START TEST nvmf_nmic 00:19:22.216 ************************************ 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.216 * Looking for test storage... 00:19:22.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [in the raw log the golangci/protoc/go toolchain triple is prepended five times over; paths/export.sh lines 3, 4 and 6 then re-emit essentially the same PATH with the triple rotated to the front; the duplicated values are elided here] 00:38:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:26 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.216 00:38:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:22.216 Cannot find device "nvmf_tgt_br" 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.216 Cannot find device "nvmf_tgt_br2" 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:22.216 Cannot find device "nvmf_tgt_br" 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:22.216 Cannot find device "nvmf_tgt_br2" 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.216 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:22.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:22.476 00:19:22.476 --- 10.0.0.2 ping statistics --- 00:19:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.476 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:22.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:22.476 00:19:22.476 --- 10.0.0.3 ping statistics --- 00:19:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.476 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:22.476 00:19:22.476 --- 10.0.0.1 ping statistics --- 00:19:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.476 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=83893 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 83893 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 83893 ']' 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.476 00:38:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:22.735 [2024-07-12 00:38:27.479302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
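The nvmf_veth_init output above is the whole test network: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, plus an iptables rule admitting the NVMe/TCP port. A stripped-down sketch of the same topology for a single target interface, reusing the names and addresses from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # same sanity check the harness runs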
00:19:22.735 [2024-07-12 00:38:27.479504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.735 [2024-07-12 00:38:27.652222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.302 [2024-07-12 00:38:27.950234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.302 [2024-07-12 00:38:27.950324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.302 [2024-07-12 00:38:27.950358] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.302 [2024-07-12 00:38:27.950377] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.302 [2024-07-12 00:38:27.950422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.302 [2024-07-12 00:38:27.950957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.302 [2024-07-12 00:38:27.951149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.302 [2024-07-12 00:38:27.951557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.302 [2024-07-12 00:38:27.951560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.560 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.560 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:23.560 00:38:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.560 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.560 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 [2024-07-12 00:38:28.506461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 Malloc0 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
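rpc_cmd is the autotest helper that forwards its arguments to the target's JSON-RPC socket, so the provisioning above is effectively a handful of scripts/rpc.py calls. A minimal sketch, with transport flags, bdev geometry and serial copied from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags exactly as nmic.sh@17 passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

The listener on 10.0.0.2:4420 is added immediately below, after which the test deliberately tries to claim Malloc0 from a second subsystem and expects the RPC to fail.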
00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 [2024-07-12 00:38:28.635042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.819 test case1: single bdev can't be used in multiple subsystems 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:23.819 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 [2024-07-12 00:38:28.670916] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:23.820 [2024-07-12 00:38:28.670982] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:23.820 [2024-07-12 00:38:28.671012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:23.820 2024/07/12 00:38:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:23.820 request: 00:19:23.820 { 00:19:23.820 "method": "nvmf_subsystem_add_ns", 00:19:23.820 "params": { 00:19:23.820 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:23.820 "namespace": { 00:19:23.820 "bdev_name": "Malloc0", 00:19:23.820 "no_auto_visible": false 00:19:23.820 } 00:19:23.820 } 00:19:23.820 } 00:19:23.820 Got JSON-RPC error response 00:19:23.820 GoRPCClient: error on JSON-RPC call 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:23.820 Adding namespace failed - expected result. 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:23.820 test case2: host connect to nvmf target in multiple paths 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:23.820 [2024-07-12 00:38:28.687079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.820 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:24.078 00:38:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:24.336 00:38:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:24.336 00:38:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:24.336 00:38:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:24.336 00:38:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:24.336 00:38:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:26.303 00:38:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:26.303 [global] 00:19:26.303 thread=1 00:19:26.303 invalidate=1 00:19:26.303 rw=write 00:19:26.303 time_based=1 00:19:26.303 runtime=1 00:19:26.303 ioengine=libaio 00:19:26.303 direct=1 00:19:26.303 bs=4096 00:19:26.303 iodepth=1 00:19:26.303 norandommap=0 00:19:26.303 numjobs=1 00:19:26.303 00:19:26.303 verify_dump=1 00:19:26.303 verify_backlog=512 00:19:26.303 verify_state_save=0 00:19:26.303 do_verify=1 00:19:26.303 verify=crc32c-intel 00:19:26.303 [job0] 00:19:26.303 filename=/dev/nvme0n1 00:19:26.303 Could not set queue depth (nvme0n1) 00:19:26.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.303 fio-3.35 00:19:26.303 
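fio-wrapper renders those parameters into the job file dumped above and runs fio against the freshly connected namespace. The same one-second verified write pass can be reproduced standalone, assuming the subsystem enumerated as /dev/nvme0n1:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 \
      --ioengine=libaio --direct=1 --invalidate=1 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1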
Starting 1 thread 00:19:27.678 00:19:27.678 job0: (groupid=0, jobs=1): err= 0: pid=83997: Fri Jul 12 00:38:32 2024 00:19:27.678 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:27.678 slat (nsec): min=13680, max=70244, avg=18667.78, stdev=6750.26 00:19:27.678 clat (usec): min=186, max=395, avg=230.90, stdev=26.16 00:19:27.678 lat (usec): min=201, max=410, avg=249.56, stdev=27.66 00:19:27.678 clat percentiles (usec): 00:19:27.678 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:19:27.678 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:19:27.678 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:19:27.678 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 371], 00:19:27.678 | 99.99th=[ 396] 00:19:27.678 write: IOPS=2496, BW=9986KiB/s (10.2MB/s)(9996KiB/1001msec); 0 zone resets 00:19:27.678 slat (usec): min=20, max=129, avg=27.89, stdev=10.27 00:19:27.678 clat (usec): min=125, max=652, avg=164.28, stdev=27.39 00:19:27.678 lat (usec): min=147, max=694, avg=192.17, stdev=31.25 00:19:27.678 clat percentiles (usec): 00:19:27.678 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:19:27.678 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:19:27.678 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 208], 00:19:27.678 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 510], 99.95th=[ 545], 00:19:27.678 | 99.99th=[ 652] 00:19:27.678 bw ( KiB/s): min=10080, max=10080, per=100.00%, avg=10080.00, stdev= 0.00, samples=1 00:19:27.678 iops : min= 2520, max= 2520, avg=2520.00, stdev= 0.00, samples=1 00:19:27.678 lat (usec) : 250=90.57%, 500=9.37%, 750=0.07% 00:19:27.678 cpu : usr=1.80%, sys=7.70%, ctx=4549, majf=0, minf=2 00:19:27.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.678 issued rwts: total=2048,2499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:27.678 00:19:27.678 Run status group 0 (all jobs): 00:19:27.678 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:19:27.678 WRITE: bw=9986KiB/s (10.2MB/s), 9986KiB/s-9986KiB/s (10.2MB/s-10.2MB/s), io=9996KiB (10.2MB), run=1001-1001msec 00:19:27.678 00:19:27.678 Disk stats (read/write): 00:19:27.678 nvme0n1: ios=2049/2048, merge=0/0, ticks=501/353, in_queue=854, util=91.59% 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.678 rmmod nvme_tcp 00:19:27.678 rmmod nvme_fabrics 00:19:27.678 rmmod nvme_keyring 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 83893 ']' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 83893 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 83893 ']' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 83893 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83893 00:19:27.678 killing process with pid 83893 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83893' 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 83893 00:19:27.678 00:38:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 83893 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.058 00:38:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.316 00:38:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:29.316 00:19:29.316 real 0m7.150s 00:19:29.316 user 0m22.465s 00:19:29.316 sys 0m1.476s 00:19:29.316 00:38:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.316 00:38:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.316 ************************************ 00:19:29.316 END TEST nvmf_nmic 00:19:29.316 ************************************ 00:19:29.316 00:38:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:29.316 00:38:34 nvmf_tcp -- 
nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:29.316 00:38:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:29.316 00:38:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.316 00:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:29.316 ************************************ 00:19:29.316 START TEST nvmf_fio_target 00:19:29.316 ************************************ 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:29.316 * Looking for test storage... 00:19:29.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.316 00:38:34 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh emits the same five PATH exports as in the nvmf_nmic section above; the duplicated values are elided here] 00:38:34 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- #
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:29.317 Cannot find device "nvmf_tgt_br" 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.317 Cannot find device "nvmf_tgt_br2" 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:19:29.317 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:19:29.575 Cannot find device "nvmf_tgt_br" 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:29.575 Cannot find device "nvmf_tgt_br2" 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:29.575 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:29.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:29.833 00:19:29.833 --- 10.0.0.2 ping statistics --- 00:19:29.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.833 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:29.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:29.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:19:29.833 00:19:29.833 --- 10.0.0.3 ping statistics --- 00:19:29.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.833 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:29.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:19:29.833 00:19:29.833 --- 10.0.0.1 ping statistics --- 00:19:29.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.833 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
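nvmfappstart boots the target inside the test namespace and blocks until the RPC socket answers. A minimal equivalent, with the app invocation copied verbatim from the log; the polling loop is a simplification of the harness's waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the app is up (-m 0xF pins reactors to cores 0-3)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done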
00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=84192 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 84192 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 84192 ']' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.833 00:38:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.833 [2024-07-12 00:38:34.729122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:29.833 [2024-07-12 00:38:34.729344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.092 [2024-07-12 00:38:34.908651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.350 [2024-07-12 00:38:35.155125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.350 [2024-07-12 00:38:35.155183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.350 [2024-07-12 00:38:35.155200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.350 [2024-07-12 00:38:35.155213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.350 [2024-07-12 00:38:35.155225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
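With the network fixture in place, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A minimal sketch of that start-and-wait pattern, using the binary and rpc.py paths printed in the trace; the polling loop below is an approximation of the waitforlisten helper from autotest_common.sh, not a reproduction of it:

#!/usr/bin/env bash
# Paths as printed in the trace above.
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target inside the namespace: instance 0, tracepoint mask
# 0xFFFF, reactors pinned to cores 0-3 (-m 0xF).
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app is ready (rough stand-in
# for waitforlisten).
for ((i = 0; i < 100; i++)); do
  "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
  sleep 0.1
done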
00:19:30.350 [2024-07-12 00:38:35.155494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.350 [2024-07-12 00:38:35.159489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.350 [2024-07-12 00:38:35.159587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.350 [2024-07-12 00:38:35.159601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.916 00:38:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:30.916 [2024-07-12 00:38:35.846911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.175 00:38:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:31.433 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:31.433 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:31.692 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:31.692 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:31.950 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:31.950 00:38:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:32.208 00:38:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:32.208 00:38:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:32.785 00:38:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.043 00:38:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:33.043 00:38:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.300 00:38:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:33.300 00:38:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:33.558 00:38:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:33.558 00:38:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:33.816 00:38:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:19:34.382 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:34.382 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:34.382 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:34.382 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:34.639 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.896 [2024-07-12 00:38:39.804329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.896 00:38:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:35.153 00:38:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:35.411 00:38:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:35.668 00:38:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:38.195 00:38:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:38.195 [global] 00:19:38.195 thread=1 00:19:38.195 invalidate=1 00:19:38.195 rw=write 00:19:38.195 time_based=1 00:19:38.195 runtime=1 00:19:38.195 ioengine=libaio 00:19:38.195 direct=1 00:19:38.195 bs=4096 00:19:38.195 iodepth=1 00:19:38.195 norandommap=0 00:19:38.195 numjobs=1 00:19:38.195 00:19:38.195 verify_dump=1 00:19:38.195 verify_backlog=512 00:19:38.195 verify_state_save=0 00:19:38.195 do_verify=1 00:19:38.195 verify=crc32c-intel 00:19:38.195 [job0] 00:19:38.195 filename=/dev/nvme0n1 00:19:38.195 [job1] 00:19:38.195 filename=/dev/nvme0n2 00:19:38.195 [job2] 
00:19:38.195 filename=/dev/nvme0n3 00:19:38.195 [job3] 00:19:38.195 filename=/dev/nvme0n4 00:19:38.195 Could not set queue depth (nvme0n1) 00:19:38.195 Could not set queue depth (nvme0n2) 00:19:38.195 Could not set queue depth (nvme0n3) 00:19:38.195 Could not set queue depth (nvme0n4) 00:19:38.195 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.195 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.195 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.195 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.195 fio-3.35 00:19:38.195 Starting 4 threads 00:19:39.128 00:19:39.128 job0: (groupid=0, jobs=1): err= 0: pid=84492: Fri Jul 12 00:38:43 2024 00:19:39.128 read: IOPS=1402, BW=5610KiB/s (5745kB/s)(5616KiB/1001msec) 00:19:39.128 slat (nsec): min=9627, max=64275, avg=18158.54, stdev=5253.73 00:19:39.128 clat (usec): min=216, max=701, avg=355.04, stdev=20.10 00:19:39.128 lat (usec): min=230, max=724, avg=373.20, stdev=20.68 00:19:39.128 clat percentiles (usec): 00:19:39.128 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 343], 00:19:39.128 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:19:39.128 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:19:39.128 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 519], 99.95th=[ 701], 00:19:39.128 | 99.99th=[ 701] 00:19:39.128 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:39.128 slat (nsec): min=11323, max=78482, avg=24743.46, stdev=7863.25 00:19:39.128 clat (usec): min=122, max=431, avg=281.31, stdev=27.78 00:19:39.128 lat (usec): min=161, max=449, avg=306.05, stdev=26.39 00:19:39.128 clat percentiles (usec): 00:19:39.128 | 1.00th=[ 157], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 269], 00:19:39.128 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:19:39.128 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:19:39.128 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 433], 00:19:39.128 | 99.99th=[ 433] 00:19:39.128 bw ( KiB/s): min= 8008, max= 8008, per=32.62%, avg=8008.00, stdev= 0.00, samples=1 00:19:39.128 iops : min= 2002, max= 2002, avg=2002.00, stdev= 0.00, samples=1 00:19:39.128 lat (usec) : 250=2.01%, 500=97.93%, 750=0.07% 00:19:39.128 cpu : usr=1.00%, sys=5.20%, ctx=2941, majf=0, minf=6 00:19:39.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.128 issued rwts: total=1404,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.129 job1: (groupid=0, jobs=1): err= 0: pid=84493: Fri Jul 12 00:38:43 2024 00:19:39.129 read: IOPS=1280, BW=5123KiB/s (5246kB/s)(5128KiB/1001msec) 00:19:39.129 slat (usec): min=11, max=101, avg=16.39, stdev= 4.67 00:19:39.129 clat (usec): min=204, max=637, avg=352.66, stdev=21.34 00:19:39.129 lat (usec): min=257, max=651, avg=369.05, stdev=21.20 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 334], 20.00th=[ 338], 00:19:39.129 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:19:39.129 | 70.00th=[ 359], 80.00th=[ 363], 
90.00th=[ 371], 95.00th=[ 375], 00:19:39.129 | 99.00th=[ 400], 99.50th=[ 482], 99.90th=[ 603], 99.95th=[ 635], 00:19:39.129 | 99.99th=[ 635] 00:19:39.129 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:39.129 slat (usec): min=11, max=117, avg=30.43, stdev=12.50 00:19:39.129 clat (usec): min=136, max=524, avg=308.53, stdev=69.42 00:19:39.129 lat (usec): min=158, max=587, avg=338.95, stdev=78.88 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 161], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 269], 00:19:39.129 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:19:39.129 | 70.00th=[ 297], 80.00th=[ 355], 90.00th=[ 449], 95.00th=[ 465], 00:19:39.129 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 529], 00:19:39.129 | 99.99th=[ 529] 00:19:39.129 bw ( KiB/s): min= 6696, max= 6696, per=27.27%, avg=6696.00, stdev= 0.00, samples=1 00:19:39.129 iops : min= 1674, max= 1674, avg=1674.00, stdev= 0.00, samples=1 00:19:39.129 lat (usec) : 250=2.48%, 500=96.95%, 750=0.57% 00:19:39.129 cpu : usr=2.10%, sys=4.60%, ctx=2821, majf=0, minf=7 00:19:39.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 issued rwts: total=1282,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.129 job2: (groupid=0, jobs=1): err= 0: pid=84494: Fri Jul 12 00:38:43 2024 00:19:39.129 read: IOPS=1402, BW=5610KiB/s (5745kB/s)(5616KiB/1001msec) 00:19:39.129 slat (usec): min=9, max=452, avg=17.05, stdev=12.67 00:19:39.129 clat (usec): min=229, max=503, avg=355.92, stdev=17.69 00:19:39.129 lat (usec): min=241, max=781, avg=372.98, stdev=21.42 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:19:39.129 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:19:39.129 | 70.00th=[ 363], 80.00th=[ 367], 90.00th=[ 375], 95.00th=[ 383], 00:19:39.129 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 502], 00:19:39.129 | 99.99th=[ 502] 00:19:39.129 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:39.129 slat (usec): min=11, max=118, avg=24.89, stdev= 8.40 00:19:39.129 clat (usec): min=156, max=428, avg=281.25, stdev=24.61 00:19:39.129 lat (usec): min=177, max=477, avg=306.14, stdev=23.27 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 180], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 269], 00:19:39.129 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:19:39.129 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:19:39.129 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 416], 99.95th=[ 429], 00:19:39.129 | 99.99th=[ 429] 00:19:39.129 bw ( KiB/s): min= 8016, max= 8016, per=32.65%, avg=8016.00, stdev= 0.00, samples=1 00:19:39.129 iops : min= 2004, max= 2004, avg=2004.00, stdev= 0.00, samples=1 00:19:39.129 lat (usec) : 250=2.01%, 500=97.96%, 750=0.03% 00:19:39.129 cpu : usr=1.10%, sys=5.30%, ctx=2942, majf=0, minf=11 00:19:39.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 issued rwts: total=1404,1536,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:39.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.129 job3: (groupid=0, jobs=1): err= 0: pid=84495: Fri Jul 12 00:38:43 2024 00:19:39.129 read: IOPS=1280, BW=5123KiB/s (5246kB/s)(5128KiB/1001msec) 00:19:39.129 slat (nsec): min=11690, max=44619, avg=17372.60, stdev=4223.55 00:19:39.129 clat (usec): min=241, max=796, avg=351.89, stdev=24.63 00:19:39.129 lat (usec): min=253, max=810, avg=369.26, stdev=24.72 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:19:39.129 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:19:39.129 | 70.00th=[ 359], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 375], 00:19:39.129 | 99.00th=[ 396], 99.50th=[ 478], 99.90th=[ 676], 99.95th=[ 799], 00:19:39.129 | 99.99th=[ 799] 00:19:39.129 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:39.129 slat (nsec): min=11286, max=94755, avg=30580.54, stdev=12772.47 00:19:39.129 clat (usec): min=154, max=524, avg=308.33, stdev=67.75 00:19:39.129 lat (usec): min=175, max=576, avg=338.91, stdev=77.87 00:19:39.129 clat percentiles (usec): 00:19:39.129 | 1.00th=[ 196], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 269], 00:19:39.129 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:19:39.129 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 445], 95.00th=[ 465], 00:19:39.129 | 99.00th=[ 490], 99.50th=[ 498], 99.90th=[ 523], 99.95th=[ 523], 00:19:39.129 | 99.99th=[ 523] 00:19:39.129 bw ( KiB/s): min= 6688, max= 6688, per=27.24%, avg=6688.00, stdev= 0.00, samples=1 00:19:39.129 iops : min= 1672, max= 1672, avg=1672.00, stdev= 0.00, samples=1 00:19:39.129 lat (usec) : 250=2.48%, 500=97.16%, 750=0.32%, 1000=0.04% 00:19:39.129 cpu : usr=1.80%, sys=4.80%, ctx=2819, majf=0, minf=11 00:19:39.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.129 issued rwts: total=1282,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.129 00:19:39.129 Run status group 0 (all jobs): 00:19:39.129 READ: bw=21.0MiB/s (22.0MB/s), 5123KiB/s-5610KiB/s (5246kB/s-5745kB/s), io=21.0MiB (22.0MB), run=1001-1001msec 00:19:39.129 WRITE: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:19:39.129 00:19:39.129 Disk stats (read/write): 00:19:39.129 nvme0n1: ios=1074/1533, merge=0/0, ticks=395/434, in_queue=829, util=87.68% 00:19:39.129 nvme0n2: ios=1068/1384, merge=0/0, ticks=416/447, in_queue=863, util=89.66% 00:19:39.129 nvme0n3: ios=1024/1534, merge=0/0, ticks=354/442, in_queue=796, util=89.13% 00:19:39.129 nvme0n4: ios=1024/1384, merge=0/0, ticks=365/449, in_queue=814, util=89.78% 00:19:39.129 00:38:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:39.129 [global] 00:19:39.129 thread=1 00:19:39.129 invalidate=1 00:19:39.129 rw=randwrite 00:19:39.129 time_based=1 00:19:39.129 runtime=1 00:19:39.129 ioengine=libaio 00:19:39.129 direct=1 00:19:39.129 bs=4096 00:19:39.129 iodepth=1 00:19:39.129 norandommap=0 00:19:39.129 numjobs=1 00:19:39.129 00:19:39.129 verify_dump=1 00:19:39.129 verify_backlog=512 00:19:39.129 verify_state_save=0 00:19:39.129 do_verify=1 00:19:39.129 
verify=crc32c-intel 00:19:39.129 [job0] 00:19:39.129 filename=/dev/nvme0n1 00:19:39.129 [job1] 00:19:39.129 filename=/dev/nvme0n2 00:19:39.129 [job2] 00:19:39.129 filename=/dev/nvme0n3 00:19:39.129 [job3] 00:19:39.129 filename=/dev/nvme0n4 00:19:39.129 Could not set queue depth (nvme0n1) 00:19:39.129 Could not set queue depth (nvme0n2) 00:19:39.129 Could not set queue depth (nvme0n3) 00:19:39.129 Could not set queue depth (nvme0n4) 00:19:39.129 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.129 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.129 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.129 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:39.129 fio-3.35 00:19:39.129 Starting 4 threads 00:19:40.503 00:19:40.503 job0: (groupid=0, jobs=1): err= 0: pid=84549: Fri Jul 12 00:38:45 2024 00:19:40.503 read: IOPS=1189, BW=4759KiB/s (4873kB/s)(4764KiB/1001msec) 00:19:40.503 slat (usec): min=9, max=136, avg=27.62, stdev= 7.76 00:19:40.503 clat (usec): min=195, max=2635, avg=381.55, stdev=91.56 00:19:40.503 lat (usec): min=226, max=2666, avg=409.18, stdev=90.71 00:19:40.503 clat percentiles (usec): 00:19:40.503 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:19:40.503 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:19:40.503 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 465], 95.00th=[ 519], 00:19:40.503 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 848], 99.95th=[ 2638], 00:19:40.503 | 99.99th=[ 2638] 00:19:40.503 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:40.503 slat (usec): min=12, max=127, avg=41.05, stdev= 9.23 00:19:40.503 clat (usec): min=144, max=17802, avg=287.17, stdev=449.20 00:19:40.503 lat (usec): min=177, max=17839, avg=328.22, stdev=449.05 00:19:40.503 clat percentiles (usec): 00:19:40.503 | 1.00th=[ 167], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 260], 00:19:40.504 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:19:40.504 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:19:40.504 | 99.00th=[ 404], 99.50th=[ 469], 99.90th=[ 1352], 99.95th=[17695], 00:19:40.504 | 99.99th=[17695] 00:19:40.504 bw ( KiB/s): min= 7416, max= 7416, per=22.65%, avg=7416.00, stdev= 0.00, samples=1 00:19:40.504 iops : min= 1854, max= 1854, avg=1854.00, stdev= 0.00, samples=1 00:19:40.504 lat (usec) : 250=5.61%, 500=91.82%, 750=2.38%, 1000=0.07% 00:19:40.504 lat (msec) : 2=0.04%, 4=0.04%, 20=0.04% 00:19:40.504 cpu : usr=2.10%, sys=7.00%, ctx=2729, majf=0, minf=14 00:19:40.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 issued rwts: total=1191,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.504 job1: (groupid=0, jobs=1): err= 0: pid=84550: Fri Jul 12 00:38:45 2024 00:19:40.504 read: IOPS=2135, BW=8543KiB/s (8748kB/s)(8552KiB/1001msec) 00:19:40.504 slat (nsec): min=9447, max=39465, avg=16859.11, stdev=2934.49 00:19:40.504 clat (usec): min=177, max=3826, avg=211.86, stdev=101.84 00:19:40.504 lat (usec): min=192, max=3853, avg=228.72, stdev=101.94 
00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:19:40.504 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:19:40.504 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 229], 00:19:40.504 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 1012], 99.95th=[ 1303], 00:19:40.504 | 99.99th=[ 3818] 00:19:40.504 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:40.504 slat (usec): min=16, max=153, avg=25.45, stdev= 7.16 00:19:40.504 clat (usec): min=60, max=7753, avg=170.60, stdev=196.61 00:19:40.504 lat (usec): min=151, max=7775, avg=196.05, stdev=197.14 00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 145], 00:19:40.504 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:19:40.504 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 215], 00:19:40.504 | 99.00th=[ 461], 99.50th=[ 506], 99.90th=[ 3392], 99.95th=[ 4080], 00:19:40.504 | 99.99th=[ 7767] 00:19:40.504 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:19:40.504 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:40.504 lat (usec) : 100=0.02%, 250=95.74%, 500=3.49%, 750=0.51%, 1000=0.04% 00:19:40.504 lat (msec) : 2=0.09%, 4=0.06%, 10=0.04% 00:19:40.504 cpu : usr=1.80%, sys=7.70%, ctx=4705, majf=0, minf=11 00:19:40.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 issued rwts: total=2138,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.504 job2: (groupid=0, jobs=1): err= 0: pid=84551: Fri Jul 12 00:38:45 2024 00:19:40.504 read: IOPS=2078, BW=8316KiB/s (8515kB/s)(8324KiB/1001msec) 00:19:40.504 slat (nsec): min=13024, max=88387, avg=17963.75, stdev=5290.25 00:19:40.504 clat (usec): min=197, max=1934, avg=219.46, stdev=39.79 00:19:40.504 lat (usec): min=210, max=1948, avg=237.42, stdev=40.46 00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 208], 00:19:40.504 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:19:40.504 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 233], 95.00th=[ 239], 00:19:40.504 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 359], 99.95th=[ 359], 00:19:40.504 | 99.99th=[ 1942] 00:19:40.504 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:40.504 slat (nsec): min=19450, max=92133, avg=25141.74, stdev=6168.26 00:19:40.504 clat (usec): min=136, max=330, avg=168.94, stdev=11.32 00:19:40.504 lat (usec): min=168, max=422, avg=194.08, stdev=13.90 00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:19:40.504 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:19:40.504 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:19:40.504 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 243], 99.95th=[ 243], 00:19:40.504 | 99.99th=[ 330] 00:19:40.504 bw ( KiB/s): min=10272, max=10272, per=31.38%, avg=10272.00, stdev= 0.00, samples=1 00:19:40.504 iops : min= 2568, max= 2568, avg=2568.00, stdev= 0.00, samples=1 00:19:40.504 lat (usec) : 250=99.16%, 500=0.82% 00:19:40.504 lat (msec) : 2=0.02% 00:19:40.504 cpu : 
usr=1.10%, sys=8.30%, ctx=4641, majf=0, minf=5 00:19:40.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.504 job3: (groupid=0, jobs=1): err= 0: pid=84552: Fri Jul 12 00:38:45 2024 00:19:40.504 read: IOPS=1251, BW=5007KiB/s (5127kB/s)(5012KiB/1001msec) 00:19:40.504 slat (nsec): min=13640, max=61937, avg=22687.98, stdev=5990.56 00:19:40.504 clat (usec): min=190, max=877, avg=382.30, stdev=73.56 00:19:40.504 lat (usec): min=206, max=892, avg=404.99, stdev=75.42 00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 208], 5.00th=[ 273], 10.00th=[ 343], 20.00th=[ 351], 00:19:40.504 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:19:40.504 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 465], 95.00th=[ 537], 00:19:40.504 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 832], 99.95th=[ 881], 00:19:40.504 | 99.99th=[ 881] 00:19:40.504 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:40.504 slat (usec): min=22, max=102, avg=42.30, stdev= 8.54 00:19:40.504 clat (usec): min=155, max=1002, avg=273.42, stdev=38.10 00:19:40.504 lat (usec): min=193, max=1040, avg=315.72, stdev=37.32 00:19:40.504 clat percentiles (usec): 00:19:40.504 | 1.00th=[ 178], 5.00th=[ 235], 10.00th=[ 249], 20.00th=[ 258], 00:19:40.504 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:19:40.504 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:19:40.504 | 99.00th=[ 396], 99.50th=[ 461], 99.90th=[ 668], 99.95th=[ 1004], 00:19:40.504 | 99.99th=[ 1004] 00:19:40.504 bw ( KiB/s): min= 7504, max= 7504, per=22.92%, avg=7504.00, stdev= 0.00, samples=1 00:19:40.504 iops : min= 1876, max= 1876, avg=1876.00, stdev= 0.00, samples=1 00:19:40.504 lat (usec) : 250=8.32%, 500=88.56%, 750=2.98%, 1000=0.11% 00:19:40.504 lat (msec) : 2=0.04% 00:19:40.504 cpu : usr=1.70%, sys=6.90%, ctx=2806, majf=0, minf=15 00:19:40.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:40.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.504 issued rwts: total=1253,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:40.504 00:19:40.504 Run status group 0 (all jobs): 00:19:40.504 READ: bw=26.0MiB/s (27.3MB/s), 4759KiB/s-8543KiB/s (4873kB/s-8748kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:19:40.504 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:19:40.504 00:19:40.504 Disk stats (read/write): 00:19:40.504 nvme0n1: ios=1074/1456, merge=0/0, ticks=412/434, in_queue=846, util=88.48% 00:19:40.504 nvme0n2: ios=2097/2268, merge=0/0, ticks=472/368, in_queue=840, util=89.29% 00:19:40.504 nvme0n3: ios=1966/2048, merge=0/0, ticks=503/372, in_queue=875, util=90.14% 00:19:40.504 nvme0n4: ios=1024/1460, merge=0/0, ticks=387/432, in_queue=819, util=89.78% 00:19:40.504 00:38:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:40.504 [global] 00:19:40.504 thread=1 
00:19:40.504 invalidate=1 00:19:40.504 rw=write 00:19:40.504 time_based=1 00:19:40.504 runtime=1 00:19:40.504 ioengine=libaio 00:19:40.504 direct=1 00:19:40.504 bs=4096 00:19:40.504 iodepth=128 00:19:40.504 norandommap=0 00:19:40.504 numjobs=1 00:19:40.504 00:19:40.504 verify_dump=1 00:19:40.504 verify_backlog=512 00:19:40.504 verify_state_save=0 00:19:40.504 do_verify=1 00:19:40.504 verify=crc32c-intel 00:19:40.504 [job0] 00:19:40.504 filename=/dev/nvme0n1 00:19:40.504 [job1] 00:19:40.504 filename=/dev/nvme0n2 00:19:40.504 [job2] 00:19:40.504 filename=/dev/nvme0n3 00:19:40.504 [job3] 00:19:40.504 filename=/dev/nvme0n4 00:19:40.504 Could not set queue depth (nvme0n1) 00:19:40.504 Could not set queue depth (nvme0n2) 00:19:40.504 Could not set queue depth (nvme0n3) 00:19:40.504 Could not set queue depth (nvme0n4) 00:19:40.504 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.504 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.504 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.504 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:40.504 fio-3.35 00:19:40.504 Starting 4 threads 00:19:41.881 00:19:41.881 job0: (groupid=0, jobs=1): err= 0: pid=84607: Fri Jul 12 00:38:46 2024 00:19:41.881 read: IOPS=5003, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1001msec) 00:19:41.881 slat (usec): min=9, max=3033, avg=95.15, stdev=444.79 00:19:41.881 clat (usec): min=385, max=15536, avg=12486.25, stdev=1163.90 00:19:41.881 lat (usec): min=2932, max=15548, avg=12581.40, stdev=1085.47 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[ 6456], 5.00th=[10421], 10.00th=[11863], 20.00th=[12387], 00:19:41.881 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:41.881 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13566], 00:19:41.881 | 99.00th=[13960], 99.50th=[14484], 99.90th=[14746], 99.95th=[15533], 00:19:41.881 | 99.99th=[15533] 00:19:41.881 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:41.881 slat (usec): min=9, max=3844, avg=94.55, stdev=399.88 00:19:41.881 clat (usec): min=9496, max=15997, avg=12482.46, stdev=1277.10 00:19:41.881 lat (usec): min=9854, max=16022, avg=12577.01, stdev=1277.19 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[10290], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:19:41.881 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12780], 60.00th=[13042], 00:19:41.881 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:19:41.881 | 99.00th=[14877], 99.50th=[15533], 99.90th=[15926], 99.95th=[15926], 00:19:41.881 | 99.99th=[16057] 00:19:41.881 bw ( KiB/s): min=20480, max=20480, per=34.35%, avg=20480.00, stdev= 0.00, samples=1 00:19:41.881 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:41.881 lat (usec) : 500=0.01% 00:19:41.881 lat (msec) : 4=0.32%, 10=1.17%, 20=98.50% 00:19:41.881 cpu : usr=4.90%, sys=13.50%, ctx=500, majf=0, minf=13 00:19:41.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:41.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:41.881 issued rwts: total=5009,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.881 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:19:41.881 job1: (groupid=0, jobs=1): err= 0: pid=84608: Fri Jul 12 00:38:46 2024 00:19:41.881 read: IOPS=5041, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec) 00:19:41.881 slat (usec): min=6, max=3144, avg=95.72, stdev=444.80 00:19:41.881 clat (usec): min=388, max=15184, avg=12559.31, stdev=1207.25 00:19:41.881 lat (usec): min=2687, max=17149, avg=12655.03, stdev=1143.15 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[ 6259], 5.00th=[10552], 10.00th=[11600], 20.00th=[12518], 00:19:41.881 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:19:41.881 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13698], 00:19:41.881 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15139], 00:19:41.881 | 99.99th=[15139] 00:19:41.881 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:41.881 slat (usec): min=8, max=3104, avg=93.64, stdev=396.87 00:19:41.881 clat (usec): min=9488, max=15343, avg=12326.07, stdev=1209.85 00:19:41.881 lat (usec): min=9778, max=15369, avg=12419.71, stdev=1209.99 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[10159], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:19:41.881 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12256], 60.00th=[12780], 00:19:41.881 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13960], 95.00th=[14091], 00:19:41.881 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15270], 99.95th=[15270], 00:19:41.881 | 99.99th=[15401] 00:19:41.881 bw ( KiB/s): min=20439, max=20480, per=34.31%, avg=20459.50, stdev=28.99, samples=2 00:19:41.881 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:19:41.881 lat (usec) : 500=0.01% 00:19:41.881 lat (msec) : 4=0.31%, 10=1.17%, 20=98.51% 00:19:41.881 cpu : usr=4.79%, sys=13.27%, ctx=515, majf=0, minf=7 00:19:41.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:41.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:41.881 issued rwts: total=5057,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:41.881 job2: (groupid=0, jobs=1): err= 0: pid=84609: Fri Jul 12 00:38:46 2024 00:19:41.881 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:19:41.881 slat (usec): min=4, max=9516, avg=171.53, stdev=900.14 00:19:41.881 clat (usec): min=13178, max=43257, avg=21435.69, stdev=4235.04 00:19:41.881 lat (usec): min=13202, max=49656, avg=21607.22, stdev=4318.12 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[14615], 5.00th=[16581], 10.00th=[17695], 20.00th=[17957], 00:19:41.881 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:19:41.881 | 70.00th=[21890], 80.00th=[26084], 90.00th=[27657], 95.00th=[28967], 00:19:41.881 | 99.00th=[32900], 99.50th=[33817], 99.90th=[43254], 99.95th=[43254], 00:19:41.881 | 99.99th=[43254] 00:19:41.881 write: IOPS=2695, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1005msec); 0 zone resets 00:19:41.881 slat (usec): min=4, max=10601, avg=198.81, stdev=901.22 00:19:41.881 clat (usec): min=4410, max=49790, avg=26481.55, stdev=8989.07 00:19:41.881 lat (usec): min=4450, max=49806, avg=26680.36, stdev=9040.91 00:19:41.881 clat percentiles (usec): 00:19:41.881 | 1.00th=[ 8848], 5.00th=[15926], 10.00th=[16909], 20.00th=[19006], 00:19:41.881 | 30.00th=[20317], 40.00th=[20579], 50.00th=[25560], 60.00th=[27395], 00:19:41.881 | 
70.00th=[31589], 80.00th=[36963], 90.00th=[41157], 95.00th=[41157], 00:19:41.881 | 99.00th=[44303], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:19:41.881 | 99.99th=[49546] 00:19:41.881 bw ( KiB/s): min= 8351, max=12312, per=17.33%, avg=10331.50, stdev=2800.85, samples=2 00:19:41.881 iops : min= 2087, max= 3078, avg=2582.50, stdev=700.74, samples=2 00:19:41.881 lat (msec) : 10=0.61%, 20=34.41%, 50=64.98% 00:19:41.881 cpu : usr=2.19%, sys=8.47%, ctx=353, majf=0, minf=11 00:19:41.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:41.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:41.881 issued rwts: total=2560,2709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:41.882 job3: (groupid=0, jobs=1): err= 0: pid=84610: Fri Jul 12 00:38:46 2024 00:19:41.882 read: IOPS=2026, BW=8107KiB/s (8302kB/s)(8156KiB/1006msec) 00:19:41.882 slat (usec): min=4, max=21109, avg=282.19, stdev=1528.39 00:19:41.882 clat (usec): min=1698, max=66296, avg=34981.56, stdev=12047.25 00:19:41.882 lat (usec): min=7110, max=66327, avg=35263.75, stdev=12054.72 00:19:41.882 clat percentiles (usec): 00:19:41.882 | 1.00th=[ 9896], 5.00th=[21627], 10.00th=[23200], 20.00th=[25560], 00:19:41.882 | 30.00th=[26870], 40.00th=[28181], 50.00th=[30278], 60.00th=[35914], 00:19:41.882 | 70.00th=[40109], 80.00th=[42730], 90.00th=[55313], 95.00th=[57934], 00:19:41.882 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:19:41.882 | 99.99th=[66323] 00:19:41.882 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:19:41.882 slat (usec): min=9, max=15457, avg=199.76, stdev=1050.86 00:19:41.882 clat (usec): min=15550, max=59107, avg=26872.66, stdev=9028.01 00:19:41.882 lat (usec): min=19671, max=59152, avg=27072.43, stdev=9016.77 00:19:41.882 clat percentiles (usec): 00:19:41.882 | 1.00th=[16581], 5.00th=[19792], 10.00th=[20317], 20.00th=[20317], 00:19:41.882 | 30.00th=[20579], 40.00th=[20841], 50.00th=[23462], 60.00th=[26346], 00:19:41.882 | 70.00th=[27919], 80.00th=[33424], 90.00th=[40633], 95.00th=[46400], 00:19:41.882 | 99.00th=[58983], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:19:41.882 | 99.99th=[58983] 00:19:41.882 bw ( KiB/s): min= 8175, max= 8192, per=13.72%, avg=8183.50, stdev=12.02, samples=2 00:19:41.882 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:19:41.882 lat (msec) : 2=0.02%, 10=0.71%, 20=4.92%, 50=86.00%, 100=8.34% 00:19:41.882 cpu : usr=2.49%, sys=6.07%, ctx=276, majf=0, minf=12 00:19:41.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:41.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:41.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:41.882 issued rwts: total=2039,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:41.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:41.882 00:19:41.882 Run status group 0 (all jobs): 00:19:41.882 READ: bw=56.9MiB/s (59.7MB/s), 8107KiB/s-19.7MiB/s (8302kB/s-20.7MB/s), io=57.3MiB (60.1MB), run=1001-1006msec 00:19:41.882 WRITE: bw=58.2MiB/s (61.1MB/s), 8143KiB/s-20.0MiB/s (8339kB/s-20.9MB/s), io=58.6MiB (61.4MB), run=1001-1006msec 00:19:41.882 00:19:41.882 Disk stats (read/write): 00:19:41.882 nvme0n1: ios=4146/4398, merge=0/0, ticks=11458/11862, in_queue=23320, util=85.96% 
00:19:41.882 nvme0n2: ios=4125/4463, merge=0/0, ticks=11972/11961, in_queue=23933, util=88.13% 00:19:41.882 nvme0n3: ios=2048/2438, merge=0/0, ticks=19650/28414, in_queue=48064, util=88.29% 00:19:41.882 nvme0n4: ios=1536/1797, merge=0/0, ticks=14634/10626, in_queue=25260, util=89.60% 00:19:41.882 00:38:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:41.882 [global] 00:19:41.882 thread=1 00:19:41.882 invalidate=1 00:19:41.882 rw=randwrite 00:19:41.882 time_based=1 00:19:41.882 runtime=1 00:19:41.882 ioengine=libaio 00:19:41.882 direct=1 00:19:41.882 bs=4096 00:19:41.882 iodepth=128 00:19:41.882 norandommap=0 00:19:41.882 numjobs=1 00:19:41.882 00:19:41.882 verify_dump=1 00:19:41.882 verify_backlog=512 00:19:41.882 verify_state_save=0 00:19:41.882 do_verify=1 00:19:41.882 verify=crc32c-intel 00:19:41.882 [job0] 00:19:41.882 filename=/dev/nvme0n1 00:19:41.882 [job1] 00:19:41.882 filename=/dev/nvme0n2 00:19:41.882 [job2] 00:19:41.882 filename=/dev/nvme0n3 00:19:41.882 [job3] 00:19:41.882 filename=/dev/nvme0n4 00:19:41.882 Could not set queue depth (nvme0n1) 00:19:41.882 Could not set queue depth (nvme0n2) 00:19:41.882 Could not set queue depth (nvme0n3) 00:19:41.882 Could not set queue depth (nvme0n4) 00:19:41.882 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:41.882 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:41.882 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:41.882 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:41.882 fio-3.35 00:19:41.882 Starting 4 threads 00:19:43.258 00:19:43.258 job0: (groupid=0, jobs=1): err= 0: pid=84669: Fri Jul 12 00:38:47 2024 00:19:43.258 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:19:43.258 slat (usec): min=6, max=28167, avg=180.08, stdev=1243.99 00:19:43.258 clat (usec): min=6123, max=71550, avg=22716.85, stdev=9530.23 00:19:43.258 lat (usec): min=6136, max=71566, avg=22896.94, stdev=9621.50 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[11469], 20.00th=[17171], 00:19:43.258 | 30.00th=[19530], 40.00th=[20841], 50.00th=[21627], 60.00th=[22414], 00:19:43.258 | 70.00th=[23987], 80.00th=[27657], 90.00th=[31327], 95.00th=[33424], 00:19:43.258 | 99.00th=[65799], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:19:43.258 | 99.99th=[71828] 00:19:43.258 write: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1012msec); 0 zone resets 00:19:43.258 slat (usec): min=5, max=17384, avg=169.14, stdev=945.65 00:19:43.258 clat (usec): min=4993, max=75413, avg=22927.30, stdev=10523.24 00:19:43.258 lat (usec): min=5016, max=75424, avg=23096.44, stdev=10598.95 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[ 9503], 5.00th=[13435], 10.00th=[14222], 20.00th=[16581], 00:19:43.258 | 30.00th=[17957], 40.00th=[20317], 50.00th=[21365], 60.00th=[21627], 00:19:43.258 | 70.00th=[23462], 80.00th=[24249], 90.00th=[31065], 95.00th=[46924], 00:19:43.258 | 99.00th=[68682], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:19:43.258 | 99.99th=[74974] 00:19:43.258 bw ( KiB/s): min=11528, max=11888, per=25.96%, avg=11708.00, stdev=254.56, samples=2 00:19:43.258 iops : min= 2882, max= 2972, avg=2927.00, stdev=63.64, samples=2 00:19:43.258 lat 
(msec) : 10=1.96%, 20=34.87%, 50=59.22%, 100=3.95% 00:19:43.258 cpu : usr=3.17%, sys=7.42%, ctx=436, majf=0, minf=3 00:19:43.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:43.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.258 issued rwts: total=2560,3055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.258 job1: (groupid=0, jobs=1): err= 0: pid=84670: Fri Jul 12 00:38:47 2024 00:19:43.258 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:19:43.258 slat (usec): min=2, max=16074, avg=233.03, stdev=1179.05 00:19:43.258 clat (usec): min=13053, max=59615, avg=29401.67, stdev=8680.89 00:19:43.258 lat (usec): min=13064, max=59630, avg=29634.69, stdev=8761.58 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[17695], 5.00th=[19530], 10.00th=[21103], 20.00th=[22414], 00:19:43.258 | 30.00th=[23462], 40.00th=[24511], 50.00th=[26346], 60.00th=[27657], 00:19:43.258 | 70.00th=[31851], 80.00th=[37487], 90.00th=[43254], 95.00th=[46400], 00:19:43.258 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[58983], 00:19:43.258 | 99.99th=[59507] 00:19:43.258 write: IOPS=2055, BW=8222KiB/s (8419kB/s)(8312KiB/1011msec); 0 zone resets 00:19:43.258 slat (usec): min=3, max=19704, avg=246.24, stdev=1360.18 00:19:43.258 clat (usec): min=5591, max=86476, avg=32604.07, stdev=16221.70 00:19:43.258 lat (usec): min=11108, max=88011, avg=32850.32, stdev=16360.65 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[11207], 5.00th=[14484], 10.00th=[17171], 20.00th=[19792], 00:19:43.258 | 30.00th=[21365], 40.00th=[24511], 50.00th=[30016], 60.00th=[32375], 00:19:43.258 | 70.00th=[40109], 80.00th=[44303], 90.00th=[47973], 95.00th=[74974], 00:19:43.258 | 99.00th=[82314], 99.50th=[82314], 99.90th=[83362], 99.95th=[85459], 00:19:43.258 | 99.99th=[86508] 00:19:43.258 bw ( KiB/s): min= 8192, max= 8192, per=18.17%, avg=8192.00, stdev= 0.00, samples=2 00:19:43.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:19:43.258 lat (msec) : 10=0.02%, 20=12.85%, 50=82.19%, 100=4.94% 00:19:43.258 cpu : usr=2.57%, sys=4.95%, ctx=571, majf=0, minf=12 00:19:43.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:43.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.258 issued rwts: total=2048,2078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.258 job2: (groupid=0, jobs=1): err= 0: pid=84671: Fri Jul 12 00:38:47 2024 00:19:43.258 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:19:43.258 slat (usec): min=4, max=14732, avg=127.23, stdev=872.12 00:19:43.258 clat (usec): min=4573, max=40876, avg=16398.55, stdev=6486.59 00:19:43.258 lat (usec): min=4586, max=40899, avg=16525.78, stdev=6555.90 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[ 5932], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11469], 00:19:43.258 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13304], 60.00th=[15008], 00:19:43.258 | 70.00th=[19268], 80.00th=[23987], 90.00th=[27919], 95.00th=[28705], 00:19:43.258 | 99.00th=[29754], 99.50th=[29754], 99.90th=[37487], 99.95th=[39060], 00:19:43.258 | 99.99th=[40633] 00:19:43.258 write: IOPS=4184, BW=16.3MiB/s 
(17.1MB/s)(16.6MiB/1013msec); 0 zone resets 00:19:43.258 slat (usec): min=5, max=10492, avg=105.56, stdev=550.75 00:19:43.258 clat (usec): min=4040, max=36869, avg=14412.12, stdev=5686.11 00:19:43.258 lat (usec): min=4063, max=36907, avg=14517.68, stdev=5736.87 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[ 4948], 5.00th=[ 6652], 10.00th=[ 8717], 20.00th=[11338], 00:19:43.258 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13304], 00:19:43.258 | 70.00th=[13435], 80.00th=[18744], 90.00th=[25297], 95.00th=[26608], 00:19:43.258 | 99.00th=[28705], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:19:43.258 | 99.99th=[36963] 00:19:43.258 bw ( KiB/s): min=12416, max=20480, per=36.48%, avg=16448.00, stdev=5702.11, samples=2 00:19:43.258 iops : min= 3104, max= 5120, avg=4112.00, stdev=1425.53, samples=2 00:19:43.258 lat (msec) : 10=10.47%, 20=66.24%, 50=23.29% 00:19:43.258 cpu : usr=4.25%, sys=10.28%, ctx=644, majf=0, minf=13 00:19:43.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:43.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.258 issued rwts: total=4096,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.258 job3: (groupid=0, jobs=1): err= 0: pid=84672: Fri Jul 12 00:38:47 2024 00:19:43.258 read: IOPS=1797, BW=7190KiB/s (7363kB/s)(7248KiB/1008msec) 00:19:43.258 slat (usec): min=4, max=20436, avg=250.64, stdev=1432.35 00:19:43.258 clat (usec): min=7460, max=62622, avg=30701.13, stdev=8251.05 00:19:43.258 lat (usec): min=7472, max=62653, avg=30951.76, stdev=8381.27 00:19:43.258 clat percentiles (usec): 00:19:43.258 | 1.00th=[ 7832], 5.00th=[20579], 10.00th=[21890], 20.00th=[24249], 00:19:43.258 | 30.00th=[26346], 40.00th=[27132], 50.00th=[28181], 60.00th=[30802], 00:19:43.258 | 70.00th=[33424], 80.00th=[39060], 90.00th=[43254], 95.00th=[45876], 00:19:43.258 | 99.00th=[50070], 99.50th=[50070], 99.90th=[60556], 99.95th=[62653], 00:19:43.258 | 99.99th=[62653] 00:19:43.258 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:19:43.259 slat (usec): min=5, max=22720, avg=258.73, stdev=1380.97 00:19:43.259 clat (usec): min=15265, max=86504, avg=34458.86, stdev=15071.61 00:19:43.259 lat (usec): min=15296, max=86578, avg=34717.58, stdev=15193.28 00:19:43.259 clat percentiles (usec): 00:19:43.259 | 1.00th=[16057], 5.00th=[17957], 10.00th=[19792], 20.00th=[23987], 00:19:43.259 | 30.00th=[24511], 40.00th=[27132], 50.00th=[29754], 60.00th=[33162], 00:19:43.259 | 70.00th=[41681], 80.00th=[43779], 90.00th=[46924], 95.00th=[79168], 00:19:43.259 | 99.00th=[82314], 99.50th=[82314], 99.90th=[83362], 99.95th=[85459], 00:19:43.259 | 99.99th=[86508] 00:19:43.259 bw ( KiB/s): min= 7672, max= 8712, per=18.17%, avg=8192.00, stdev=735.39, samples=2 00:19:43.259 iops : min= 1918, max= 2178, avg=2048.00, stdev=183.85, samples=2 00:19:43.259 lat (msec) : 10=0.54%, 20=6.84%, 50=88.45%, 100=4.17% 00:19:43.259 cpu : usr=2.09%, sys=5.56%, ctx=518, majf=0, minf=13 00:19:43.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:43.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.259 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.259 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:19:43.259 00:19:43.259 Run status group 0 (all jobs): 00:19:43.259 READ: bw=40.6MiB/s (42.5MB/s), 7190KiB/s-15.8MiB/s (7363kB/s-16.6MB/s), io=41.1MiB (43.1MB), run=1008-1013msec 00:19:43.259 WRITE: bw=44.0MiB/s (46.2MB/s), 8127KiB/s-16.3MiB/s (8322kB/s-17.1MB/s), io=44.6MiB (46.8MB), run=1008-1013msec 00:19:43.259 00:19:43.259 Disk stats (read/write): 00:19:43.259 nvme0n1: ios=2212/2560, merge=0/0, ticks=41169/53463, in_queue=94632, util=88.78% 00:19:43.259 nvme0n2: ios=1585/1768, merge=0/0, ticks=23217/28548, in_queue=51765, util=89.61% 00:19:43.259 nvme0n3: ios=3623/4027, merge=0/0, ticks=44335/45655, in_queue=89990, util=91.30% 00:19:43.259 nvme0n4: ios=1528/1537, merge=0/0, ticks=24085/27636, in_queue=51721, util=88.26% 00:19:43.259 00:38:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:43.259 00:38:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=84685 00:19:43.259 00:38:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:43.259 00:38:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:43.259 [global] 00:19:43.259 thread=1 00:19:43.259 invalidate=1 00:19:43.259 rw=read 00:19:43.259 time_based=1 00:19:43.259 runtime=10 00:19:43.259 ioengine=libaio 00:19:43.259 direct=1 00:19:43.259 bs=4096 00:19:43.259 iodepth=1 00:19:43.259 norandommap=1 00:19:43.259 numjobs=1 00:19:43.259 00:19:43.259 [job0] 00:19:43.259 filename=/dev/nvme0n1 00:19:43.259 [job1] 00:19:43.259 filename=/dev/nvme0n2 00:19:43.259 [job2] 00:19:43.259 filename=/dev/nvme0n3 00:19:43.259 [job3] 00:19:43.259 filename=/dev/nvme0n4 00:19:43.259 Could not set queue depth (nvme0n1) 00:19:43.259 Could not set queue depth (nvme0n2) 00:19:43.259 Could not set queue depth (nvme0n3) 00:19:43.259 Could not set queue depth (nvme0n4) 00:19:43.518 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:43.518 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:43.518 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:43.518 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:43.518 fio-3.35 00:19:43.518 Starting 4 threads 00:19:46.802 00:38:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:46.802 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30449664, buflen=4096 00:19:46.802 fio: pid=84728, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:46.802 00:38:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:46.802 fio: pid=84727, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:46.802 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=30306304, buflen=4096 00:19:46.802 00:38:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:46.802 00:38:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:47.060 fio: pid=84725, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:47.060 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=33226752, 
buflen=4096 00:19:47.320 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:47.320 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:47.320 fio: pid=84726, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:47.320 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5341184, buflen=4096 00:19:47.579 00:19:47.579 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=84725: Fri Jul 12 00:38:52 2024 00:19:47.579 read: IOPS=2314, BW=9258KiB/s (9480kB/s)(31.7MiB/3505msec) 00:19:47.579 slat (usec): min=7, max=10824, avg=23.28, stdev=206.66 00:19:47.579 clat (usec): min=66, max=4991, avg=406.88, stdev=104.89 00:19:47.579 lat (usec): min=184, max=11064, avg=430.16, stdev=229.81 00:19:47.579 clat percentiles (usec): 00:19:47.579 | 1.00th=[ 186], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 355], 00:19:47.579 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 00:19:47.579 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 457], 95.00th=[ 465], 00:19:47.579 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 1205], 99.95th=[ 1614], 00:19:47.579 | 99.99th=[ 5014] 00:19:47.579 bw ( KiB/s): min= 8448, max=10304, per=21.51%, avg=8949.33, stdev=674.84, samples=6 00:19:47.579 iops : min= 2112, max= 2576, avg=2237.33, stdev=168.71, samples=6 00:19:47.579 lat (usec) : 100=0.01%, 250=3.75%, 500=95.45%, 750=0.59%, 1000=0.06% 00:19:47.579 lat (msec) : 2=0.07%, 4=0.01%, 10=0.04% 00:19:47.579 cpu : usr=0.88%, sys=3.91%, ctx=8133, majf=0, minf=1 00:19:47.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 issued rwts: total=8113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.579 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=84726: Fri Jul 12 00:38:52 2024 00:19:47.579 read: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(69.1MiB/3907msec) 00:19:47.579 slat (usec): min=11, max=14896, avg=18.37, stdev=216.34 00:19:47.579 clat (usec): min=162, max=2725, avg=201.05, stdev=41.52 00:19:47.579 lat (usec): min=179, max=15479, avg=219.42, stdev=223.30 00:19:47.579 clat percentiles (usec): 00:19:47.579 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:19:47.579 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 198], 00:19:47.579 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 221], 00:19:47.579 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 441], 99.95th=[ 586], 00:19:47.579 | 99.99th=[ 2180] 00:19:47.579 bw ( KiB/s): min=14532, max=18944, per=43.58%, avg=18128.57, stdev=1608.74, samples=7 00:19:47.579 iops : min= 3633, max= 4736, avg=4532.14, stdev=402.19, samples=7 00:19:47.579 lat (usec) : 250=96.58%, 500=3.35%, 750=0.03% 00:19:47.579 lat (msec) : 2=0.02%, 4=0.02% 00:19:47.579 cpu : usr=1.25%, sys=5.30%, ctx=17701, majf=0, minf=1 00:19:47.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 issued rwts: 
total=17689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.579 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=84727: Fri Jul 12 00:38:52 2024 00:19:47.579 read: IOPS=2263, BW=9054KiB/s (9271kB/s)(28.9MiB/3269msec) 00:19:47.579 slat (usec): min=7, max=8088, avg=20.80, stdev=129.01 00:19:47.579 clat (usec): min=190, max=4174, avg=418.75, stdev=83.54 00:19:47.579 lat (usec): min=207, max=8457, avg=439.55, stdev=152.29 00:19:47.579 clat percentiles (usec): 00:19:47.579 | 1.00th=[ 223], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 363], 00:19:47.579 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 441], 00:19:47.579 | 70.00th=[ 445], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:19:47.579 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 1037], 99.95th=[ 1483], 00:19:47.579 | 99.99th=[ 4178] 00:19:47.579 bw ( KiB/s): min= 8456, max=10264, per=21.50%, avg=8944.00, stdev=657.60, samples=6 00:19:47.579 iops : min= 2114, max= 2566, avg=2236.00, stdev=164.40, samples=6 00:19:47.579 lat (usec) : 250=1.34%, 500=97.89%, 750=0.61%, 1000=0.03% 00:19:47.579 lat (msec) : 2=0.08%, 4=0.03%, 10=0.01% 00:19:47.579 cpu : usr=1.13%, sys=3.79%, ctx=7415, majf=0, minf=1 00:19:47.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 issued rwts: total=7400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.579 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=84728: Fri Jul 12 00:38:52 2024 00:19:47.579 read: IOPS=2456, BW=9824KiB/s (10.1MB/s)(29.0MiB/3027msec) 00:19:47.579 slat (usec): min=8, max=151, avg=15.04, stdev= 5.82 00:19:47.579 clat (usec): min=189, max=7907, avg=390.16, stdev=171.56 00:19:47.579 lat (usec): min=205, max=7922, avg=405.20, stdev=170.99 00:19:47.579 clat percentiles (usec): 00:19:47.579 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 221], 00:19:47.579 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 445], 00:19:47.579 | 70.00th=[ 449], 80.00th=[ 453], 90.00th=[ 465], 95.00th=[ 469], 00:19:47.579 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 1287], 99.95th=[ 3589], 00:19:47.579 | 99.99th=[ 7898] 00:19:47.579 bw ( KiB/s): min= 8448, max=15768, per=23.70%, avg=9860.00, stdev=2896.90, samples=6 00:19:47.579 iops : min= 2112, max= 3942, avg=2465.00, stdev=724.23, samples=6 00:19:47.579 lat (usec) : 250=24.99%, 500=74.28%, 750=0.54%, 1000=0.03% 00:19:47.579 lat (msec) : 2=0.07%, 4=0.05%, 10=0.03% 00:19:47.579 cpu : usr=0.66%, sys=3.57%, ctx=7441, majf=0, minf=1 00:19:47.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.579 issued rwts: total=7435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.579 00:19:47.579 Run status group 0 (all jobs): 00:19:47.579 READ: bw=40.6MiB/s (42.6MB/s), 9054KiB/s-17.7MiB/s (9271kB/s-18.5MB/s), io=159MiB (166MB), run=3027-3907msec 00:19:47.579 00:19:47.579 Disk stats (read/write): 00:19:47.579 nvme0n1: ios=7702/0, merge=0/0, 
ticks=3178/0, in_queue=3178, util=95.45% 00:19:47.579 nvme0n2: ios=17526/0, merge=0/0, ticks=3590/0, in_queue=3590, util=95.34% 00:19:47.579 nvme0n3: ios=6987/0, merge=0/0, ticks=2885/0, in_queue=2885, util=96.37% 00:19:47.579 nvme0n4: ios=7120/0, merge=0/0, ticks=2633/0, in_queue=2633, util=96.53% 00:19:47.579 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:47.579 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:48.147 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.147 00:38:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:48.712 00:38:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.713 00:38:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:48.969 00:38:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:48.969 00:38:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:49.534 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:49.534 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 84685 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:49.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:49.792 nvmf hotplug test: fio failed as expected 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:49.792 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.049 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:50.049 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # 
rm -f ./local-job1-1-verify.state 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.307 00:38:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.307 rmmod nvme_tcp 00:19:50.307 rmmod nvme_fabrics 00:19:50.307 rmmod nvme_keyring 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 84192 ']' 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 84192 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 84192 ']' 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 84192 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84192 00:19:50.307 killing process with pid 84192 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84192' 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 84192 00:19:50.307 00:38:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 84192 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.679 ************************************ 00:19:51.679 END TEST nvmf_fio_target 00:19:51.679 ************************************ 
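The hotplug pass that just ended is intentionally destructive: fio is launched through scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t read -r 10) while the backing raid/concat/malloc bdevs are deleted out from under it, so the per-job err=121 (Remote I/O error) results and the final "nvmf hotplug test: fio failed as expected" message are the pass condition, not a failure. Reassembled from the [global]/[jobN] dump printed earlier in the run, the generated job file is roughly the following sketch (the hotplug.fio filename is illustrative; the wrapper manages its own temporary file):

# recreate the read-phase hotplug job from the dumped config
cat > hotplug.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio hotplug.fio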
00:19:51.679 00:19:51.679 real 0m22.255s 00:19:51.679 user 1m24.381s 00:19:51.679 sys 0m8.177s 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.679 00:38:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.679 00:38:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:51.679 00:38:56 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:51.679 00:38:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:51.679 00:38:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.679 00:38:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.679 ************************************ 00:19:51.679 START TEST nvmf_bdevio 00:19:51.679 ************************************ 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:51.679 * Looking for test storage... 00:19:51.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.679 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
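bdevio.sh pins MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 up front; those two values become the arguments of the bdev_malloc_create call later in this run, i.e. a 64 MiB bdev with 512-byte blocks (131072 blocks, as the bdevio banner below reports). As a standalone sketch, the equivalent RPC would be:

# 64 MiB malloc bdev, 512-byte blocks, exposed as Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0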
00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:51.680 Cannot find device "nvmf_tgt_br" 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.680 Cannot find device "nvmf_tgt_br2" 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:51.680 Cannot find device "nvmf_tgt_br" 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:51.680 Cannot find device "nvmf_tgt_br2" 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@159 -- # true 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:51.680 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.938 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 
1 10.0.0.2 00:19:51.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:51.939 00:19:51.939 --- 10.0.0.2 ping statistics --- 00:19:51.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.939 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:51.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:51.939 00:19:51.939 --- 10.0.0.3 ping statistics --- 00:19:51.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.939 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:51.939 00:19:51.939 --- 10.0.0.1 ping statistics --- 00:19:51.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.939 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=85077 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 85077 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 85077 ']' 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
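The three pings above are the smoke test for the virtual topology nvmf_veth_init just assembled: veth pairs nvmf_init_if/nvmf_init_br and nvmf_tgt_if/nvmf_tgt_br (plus a second target pair), the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side ends enslaved to the nvmf_br bridge, and an iptables rule admitting TCP port 4420. Condensed from the trace above (second target pair and some link-up steps omitted for brevity), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br           # host-side peers join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator side -> target namespace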
00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.939 00:38:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.196 [2024-07-12 00:38:56.965818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:52.196 [2024-07-12 00:38:56.965969] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.454 [2024-07-12 00:38:57.135764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.712 [2024-07-12 00:38:57.408146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.712 [2024-07-12 00:38:57.408211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.712 [2024-07-12 00:38:57.408229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.712 [2024-07-12 00:38:57.408244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.712 [2024-07-12 00:38:57.408257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.712 [2024-07-12 00:38:57.408450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.712 [2024-07-12 00:38:57.408534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:52.712 [2024-07-12 00:38:57.410440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:52.712 [2024-07-12 00:38:57.410462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.970 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 [2024-07-12 00:38:57.908327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.228 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 00:38:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.228 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.228 00:38:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 Malloc0 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 [2024-07-12 00:38:58.035554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.228 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.228 { 00:19:53.228 "params": { 00:19:53.228 "name": "Nvme$subsystem", 00:19:53.228 "trtype": "$TEST_TRANSPORT", 00:19:53.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.228 "adrfam": "ipv4", 00:19:53.229 "trsvcid": "$NVMF_PORT", 00:19:53.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.229 "hdgst": ${hdgst:-false}, 00:19:53.229 "ddgst": ${ddgst:-false} 00:19:53.229 }, 00:19:53.229 "method": "bdev_nvme_attach_controller" 00:19:53.229 } 00:19:53.229 EOF 00:19:53.229 )") 00:19:53.229 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:53.229 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:53.229 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:53.229 00:38:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.229 "params": { 00:19:53.229 "name": "Nvme1", 00:19:53.229 "trtype": "tcp", 00:19:53.229 "traddr": "10.0.0.2", 00:19:53.229 "adrfam": "ipv4", 00:19:53.229 "trsvcid": "4420", 00:19:53.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.229 "hdgst": false, 00:19:53.229 "ddgst": false 00:19:53.229 }, 00:19:53.229 "method": "bdev_nvme_attach_controller" 00:19:53.229 }' 00:19:53.229 [2024-07-12 00:38:58.135654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
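At this point the target is fully provisioned and gen_nvmf_target_json has expanded its heredoc template into the single-controller JSON that bdevio reads via --json /dev/fd/62 (controller Nvme1, TCP, 10.0.0.2:4420, subsystem cnode1, header/data digests off). Replayed outside the test harness, the provisioning sequence traced above amounts to roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0        # backing bdev for the namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420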
00:19:53.229 [2024-07-12 00:38:58.135830] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85131 ] 00:19:53.487 [2024-07-12 00:38:58.302701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.745 [2024-07-12 00:38:58.551011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.745 [2024-07-12 00:38:58.551142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.745 [2024-07-12 00:38:58.551156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.313 I/O targets: 00:19:54.313 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:54.313 00:19:54.313 00:19:54.313 CUnit - A unit testing framework for C - Version 2.1-3 00:19:54.313 http://cunit.sourceforge.net/ 00:19:54.313 00:19:54.313 00:19:54.313 Suite: bdevio tests on: Nvme1n1 00:19:54.313 Test: blockdev write read block ...passed 00:19:54.313 Test: blockdev write zeroes read block ...passed 00:19:54.313 Test: blockdev write zeroes read no split ...passed 00:19:54.313 Test: blockdev write zeroes read split ...passed 00:19:54.313 Test: blockdev write zeroes read split partial ...passed 00:19:54.313 Test: blockdev reset ...[2024-07-12 00:38:59.120042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.313 [2024-07-12 00:38:59.120241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:19:54.313 [2024-07-12 00:38:59.132752] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:54.313 passed 00:19:54.313 Test: blockdev write read 8 blocks ...passed 00:19:54.313 Test: blockdev write read size > 128k ...passed 00:19:54.313 Test: blockdev write read invalid size ...passed 00:19:54.313 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:54.313 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:54.313 Test: blockdev write read max offset ...passed 00:19:54.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:54.572 Test: blockdev writev readv 8 blocks ...passed 00:19:54.572 Test: blockdev writev readv 30 x 1block ...passed 00:19:54.572 Test: blockdev writev readv block ...passed 00:19:54.572 Test: blockdev writev readv size > 128k ...passed 00:19:54.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:54.572 Test: blockdev comparev and writev ...[2024-07-12 00:38:59.308752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.308830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.308861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.308878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.309370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.309432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.309462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.309478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.310091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.310126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.310152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.310169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.310607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.310645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.310681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.572 [2024-07-12 00:38:59.310698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:54.572 passed 00:19:54.572 Test: blockdev nvme passthru rw ...passed 00:19:54.572 Test: blockdev nvme passthru vendor specific ...[2024-07-12 00:38:59.394068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.572 [2024-07-12 00:38:59.394144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.394339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.572 [2024-07-12 00:38:59.394363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.394558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.572 [2024-07-12 00:38:59.394592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:54.572 [2024-07-12 00:38:59.394745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.572 [2024-07-12 00:38:59.394776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:54.572 passed 00:19:54.572 Test: blockdev nvme admin passthru ...passed 00:19:54.572 Test: blockdev copy ...passed 00:19:54.572 00:19:54.572 Run Summary: Type Total Ran Passed Failed Inactive 00:19:54.572 suites 1 1 n/a 0 0 00:19:54.572 tests 23 23 23 0 0 00:19:54.572 asserts 152 152 152 0 n/a 00:19:54.572 00:19:54.572 Elapsed time = 1.038 seconds 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.947 rmmod nvme_tcp 00:19:55.947 rmmod nvme_fabrics 00:19:55.947 rmmod nvme_keyring 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 85077 ']' 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 85077 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
85077 ']' 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 85077 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85077 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:55.947 killing process with pid 85077 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85077' 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 85077 00:19:55.947 00:39:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 85077 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.324 00:39:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.325 00:39:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.583 00:39:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.583 00:19:57.583 real 0m5.886s 00:19:57.583 user 0m22.909s 00:19:57.583 sys 0m1.141s 00:19:57.583 00:39:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.583 00:39:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:57.583 ************************************ 00:19:57.583 END TEST nvmf_bdevio 00:19:57.583 ************************************ 00:19:57.583 00:39:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:57.583 00:39:02 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:57.583 00:39:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:57.583 00:39:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.583 00:39:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.583 ************************************ 00:19:57.583 START TEST nvmf_auth_target 00:19:57.583 ************************************ 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:57.583 * Looking for test storage... 
00:19:57.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.583 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.584 Cannot find device "nvmf_tgt_br" 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.584 Cannot find device "nvmf_tgt_br2" 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.584 Cannot find device "nvmf_tgt_br" 00:19:57.584 
00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.584 Cannot find device "nvmf_tgt_br2" 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:19:57.584 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:57.842 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.843 00:39:02 
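Taken together, these ip commands assemble a three-armed test network: nvmf_init_if (10.0.0.1/24) stays in the root namespace as the initiator NIC, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace as target ports, and every veth's peer end is enslaved to the nvmf_br bridge so the three arms can reach each other. A condensed sketch of the same construction, root privileges assumed and the second target port omitted for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2    # root namespace -> target port across the bridge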
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:57.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:57.843 00:19:57.843 --- 10.0.0.2 ping statistics --- 00:19:57.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.843 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:57.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:57.843 00:19:57.843 --- 10.0.0.3 ping statistics --- 00:19:57.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.843 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:57.843 00:19:57.843 --- 10.0.0.1 ping statistics --- 00:19:57.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.843 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.843 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=85363 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 85363 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85363 ']' 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.141 00:39:02 
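nvmfappstart launches the target with the namespace prefix added at common.sh@209, so nvmf_tgt and its TCP listener live entirely inside nvmf_tgt_ns_spdk, and waitforlisten then blocks until the RPC socket answers. A hedged sketch of that start-and-wait pattern, using the binary and socket paths from this run:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# rpc.py exits non-zero until the app has created and bound its UNIX socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done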
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.141 00:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=85407 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=842283c8362831d7249f072a47577fb827e72050aaeceee2 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9Pm 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 842283c8362831d7249f072a47577fb827e72050aaeceee2 0 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 842283c8362831d7249f072a47577fb827e72050aaeceee2 0 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=842283c8362831d7249f072a47577fb827e72050aaeceee2 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9Pm 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9Pm 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.9Pm 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b233f81af8c9b9e357265ab14f5725e05ded000fa39053d603eaef29a41de62e 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SAD 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b233f81af8c9b9e357265ab14f5725e05ded000fa39053d603eaef29a41de62e 3 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b233f81af8c9b9e357265ab14f5725e05ded000fa39053d603eaef29a41de62e 3 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b233f81af8c9b9e357265ab14f5725e05ded000fa39053d603eaef29a41de62e 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:59.072 00:39:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SAD 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SAD 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.SAD 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:59.330 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc95e5176274b6dfa5158d236dcb9d88 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AKQ 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc95e5176274b6dfa5158d236dcb9d88 1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc95e5176274b6dfa5158d236dcb9d88 1 
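gen_dhchap_key takes a digest name and a length in hex characters, so xxd reads len/2 random bytes and prints them as one hex string; that string is then used verbatim as the ASCII secret and wrapped by the traced `python -` snippet into the NVMe DHHC-1 form, DHHC-1:<hash-id>:<base64-payload>:, where the two-digit hash id is the digests[] index (00=null ... 03=sha512). A hedged reconstruction of the formatting step; the CRC-32 trailer inside the base64 payload is an assumption inferred from the connect strings later in this log:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> a 48-char hex secret
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string is the secret itself
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed 4-byte CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(secret + crc).decode()))
EOF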
00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc95e5176274b6dfa5158d236dcb9d88 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AKQ 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AKQ 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.AKQ 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ac3ca835fdd9437a16f04f942df0f3e172e3312b7237ab79 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2xX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ac3ca835fdd9437a16f04f942df0f3e172e3312b7237ab79 2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ac3ca835fdd9437a16f04f942df0f3e172e3312b7237ab79 2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ac3ca835fdd9437a16f04f942df0f3e172e3312b7237ab79 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2xX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2xX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.2xX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:59.331 
00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6620d9aa1590d58a20906186b7c42f14fc003cb421b5fb5b 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2Xv 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6620d9aa1590d58a20906186b7c42f14fc003cb421b5fb5b 2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6620d9aa1590d58a20906186b7c42f14fc003cb421b5fb5b 2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6620d9aa1590d58a20906186b7c42f14fc003cb421b5fb5b 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2Xv 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2Xv 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.2Xv 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=501e12b488de9a37e17e80555ab265c8 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9OO 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 501e12b488de9a37e17e80555ab265c8 1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 501e12b488de9a37e17e80555ab265c8 1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=501e12b488de9a37e17e80555ab265c8 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:59.331 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9OO 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9OO 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9OO 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7943f6af12617e8c8d06f8a7d0b77d10136e8e87910256191188321e5091b496 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CYb 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7943f6af12617e8c8d06f8a7d0b77d10136e8e87910256191188321e5091b496 3 00:19:59.589 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7943f6af12617e8c8d06f8a7d0b77d10136e8e87910256191188321e5091b496 3 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7943f6af12617e8c8d06f8a7d0b77d10136e8e87910256191188321e5091b496 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CYb 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CYb 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.CYb 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 85363 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85363 ']' 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
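Both daemons are now up, and the rest of the test talks to them over separate UNIX sockets: plain rpc_cmd reaches the nvmf target at the default /var/tmp/spdk.sock, while the hostrpc helper (target/auth.sh@31, traced throughout) points the same rpc.py client at the host-side process on /var/tmp/host.sock. A minimal sketch of the wrapper plus a call matching the trace below:

hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}
hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9Pm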
00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.590 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 85407 /var/tmp/host.sock 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 85407 ']' 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:59.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.866 00:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9Pm 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.9Pm 00:20:00.433 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.9Pm 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.SAD ]] 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAD 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAD 00:20:00.998 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.SAD 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AKQ 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AKQ 00:20:01.256 00:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AKQ 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.2xX ]] 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2xX 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2xX 00:20:01.513 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2xX 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2Xv 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2Xv 00:20:01.773 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2Xv 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9OO ]] 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9OO 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9OO 00:20:02.032 00:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9OO 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:02.290 
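The pattern repeating through this stretch is the key-registration loop from target/auth.sh@81-@86: every generated key file is registered with both processes under the slot name keyN, and the matching controller key ckeyN only when one exists for that slot (ckeys[3] is empty, so key3 is loaded alone). Condensed:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"      # target side
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"      # host side
    if [[ -n ${ckeys[$i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done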
00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CYb 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CYb 00:20:02.290 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CYb 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.548 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.806 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.063 00:20:03.063 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.063 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:03.063 00:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.322 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.322 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.322 00:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.322 00:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.580 { 00:20:03.580 "auth": { 00:20:03.580 "dhgroup": "null", 00:20:03.580 "digest": "sha256", 00:20:03.580 "state": "completed" 00:20:03.580 }, 00:20:03.580 "cntlid": 1, 00:20:03.580 "listen_address": { 00:20:03.580 "adrfam": "IPv4", 00:20:03.580 "traddr": "10.0.0.2", 00:20:03.580 "trsvcid": "4420", 00:20:03.580 "trtype": "TCP" 00:20:03.580 }, 00:20:03.580 "peer_address": { 00:20:03.580 "adrfam": "IPv4", 00:20:03.580 "traddr": "10.0.0.1", 00:20:03.580 "trsvcid": "38950", 00:20:03.580 "trtype": "TCP" 00:20:03.580 }, 00:20:03.580 "qid": 0, 00:20:03.580 "state": "enabled", 00:20:03.580 "thread": "nvmf_tgt_poll_group_000" 00:20:03.580 } 00:20:03.580 ]' 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.580 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.838 00:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.141 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.141 00:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.398 { 00:20:09.398 "auth": { 00:20:09.398 "dhgroup": "null", 00:20:09.398 "digest": "sha256", 00:20:09.398 "state": "completed" 00:20:09.398 }, 00:20:09.398 "cntlid": 3, 00:20:09.398 "listen_address": { 00:20:09.398 "adrfam": "IPv4", 00:20:09.398 "traddr": "10.0.0.2", 00:20:09.398 "trsvcid": "4420", 00:20:09.398 "trtype": "TCP" 00:20:09.398 }, 00:20:09.398 "peer_address": { 
00:20:09.398 "adrfam": "IPv4", 00:20:09.398 "traddr": "10.0.0.1", 00:20:09.398 "trsvcid": "50124", 00:20:09.398 "trtype": "TCP" 00:20:09.398 }, 00:20:09.398 "qid": 0, 00:20:09.398 "state": "enabled", 00:20:09.398 "thread": "nvmf_tgt_poll_group_000" 00:20:09.398 } 00:20:09.398 ]' 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:09.398 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.656 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.656 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.656 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.914 00:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:10.845 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.846 00:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.411 00:20:11.411 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.411 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.411 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.669 { 00:20:11.669 "auth": { 00:20:11.669 "dhgroup": "null", 00:20:11.669 "digest": "sha256", 00:20:11.669 "state": "completed" 00:20:11.669 }, 00:20:11.669 "cntlid": 5, 00:20:11.669 "listen_address": { 00:20:11.669 "adrfam": "IPv4", 00:20:11.669 "traddr": "10.0.0.2", 00:20:11.669 "trsvcid": "4420", 00:20:11.669 "trtype": "TCP" 00:20:11.669 }, 00:20:11.669 "peer_address": { 00:20:11.669 "adrfam": "IPv4", 00:20:11.669 "traddr": "10.0.0.1", 00:20:11.669 "trsvcid": "50138", 00:20:11.669 "trtype": "TCP" 00:20:11.669 }, 00:20:11.669 "qid": 0, 00:20:11.669 "state": "enabled", 00:20:11.669 "thread": "nvmf_tgt_poll_group_000" 00:20:11.669 } 00:20:11.669 ]' 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.669 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.927 00:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.861 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.120 00:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.378 00:20:13.378 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.378 00:39:18 
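The DHHC-1 strings handed to nvme connect in this stretch are the hex secrets generated earlier, re-encoded: strip the DHHC-1: prefix and the two-digit hash id, base64-decode the payload, and the ASCII passphrase reappears followed by its 4-byte trailer. For example, with the key1 secret from this run:

base64 -d <<< 'YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ' | head -c 32; echo
# -> bc95e5176274b6dfa5158d236dcb9d88   (keys[1] as generated above)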
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.378 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.636 { 00:20:13.636 "auth": { 00:20:13.636 "dhgroup": "null", 00:20:13.636 "digest": "sha256", 00:20:13.636 "state": "completed" 00:20:13.636 }, 00:20:13.636 "cntlid": 7, 00:20:13.636 "listen_address": { 00:20:13.636 "adrfam": "IPv4", 00:20:13.636 "traddr": "10.0.0.2", 00:20:13.636 "trsvcid": "4420", 00:20:13.636 "trtype": "TCP" 00:20:13.636 }, 00:20:13.636 "peer_address": { 00:20:13.636 "adrfam": "IPv4", 00:20:13.636 "traddr": "10.0.0.1", 00:20:13.636 "trsvcid": "50172", 00:20:13.636 "trtype": "TCP" 00:20:13.636 }, 00:20:13.636 "qid": 0, 00:20:13.636 "state": "enabled", 00:20:13.636 "thread": "nvmf_tgt_poll_group_000" 00:20:13.636 } 00:20:13.636 ]' 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:13.636 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.893 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.893 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.893 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.151 00:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:14.717 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.717 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:14.717 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.717 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.975 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.233 00:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.233 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.233 00:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.491 00:20:15.491 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.491 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.491 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.750 { 00:20:15.750 "auth": { 00:20:15.750 "dhgroup": "ffdhe2048", 00:20:15.750 "digest": "sha256", 00:20:15.750 "state": "completed" 00:20:15.750 }, 00:20:15.750 "cntlid": 9, 00:20:15.750 "listen_address": { 00:20:15.750 "adrfam": "IPv4", 
00:20:15.750 "traddr": "10.0.0.2", 00:20:15.750 "trsvcid": "4420", 00:20:15.750 "trtype": "TCP" 00:20:15.750 }, 00:20:15.750 "peer_address": { 00:20:15.750 "adrfam": "IPv4", 00:20:15.750 "traddr": "10.0.0.1", 00:20:15.750 "trsvcid": "50196", 00:20:15.750 "trtype": "TCP" 00:20:15.750 }, 00:20:15.750 "qid": 0, 00:20:15.750 "state": "enabled", 00:20:15.750 "thread": "nvmf_tgt_poll_group_000" 00:20:15.750 } 00:20:15.750 ]' 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.750 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.010 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.010 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.010 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.010 00:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:16.945 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.945 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:16.945 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.946 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.946 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.946 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.946 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:16.946 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.207 00:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.465 00:20:17.465 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.465 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.465 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.724 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.724 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.724 00:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.724 00:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.982 { 00:20:17.982 "auth": { 00:20:17.982 "dhgroup": "ffdhe2048", 00:20:17.982 "digest": "sha256", 00:20:17.982 "state": "completed" 00:20:17.982 }, 00:20:17.982 "cntlid": 11, 00:20:17.982 "listen_address": { 00:20:17.982 "adrfam": "IPv4", 00:20:17.982 "traddr": "10.0.0.2", 00:20:17.982 "trsvcid": "4420", 00:20:17.982 "trtype": "TCP" 00:20:17.982 }, 00:20:17.982 "peer_address": { 00:20:17.982 "adrfam": "IPv4", 00:20:17.982 "traddr": "10.0.0.1", 00:20:17.982 "trsvcid": "50222", 00:20:17.982 "trtype": "TCP" 00:20:17.982 }, 00:20:17.982 "qid": 0, 00:20:17.982 "state": "enabled", 00:20:17.982 "thread": "nvmf_tgt_poll_group_000" 00:20:17.982 } 00:20:17.982 ]' 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.982 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.982 00:39:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.983 00:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.242 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.175 00:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.435 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.693 00:20:19.693 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.693 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.693 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.952 { 00:20:19.952 "auth": { 00:20:19.952 "dhgroup": "ffdhe2048", 00:20:19.952 "digest": "sha256", 00:20:19.952 "state": "completed" 00:20:19.952 }, 00:20:19.952 "cntlid": 13, 00:20:19.952 "listen_address": { 00:20:19.952 "adrfam": "IPv4", 00:20:19.952 "traddr": "10.0.0.2", 00:20:19.952 "trsvcid": "4420", 00:20:19.952 "trtype": "TCP" 00:20:19.952 }, 00:20:19.952 "peer_address": { 00:20:19.952 "adrfam": "IPv4", 00:20:19.952 "traddr": "10.0.0.1", 00:20:19.952 "trsvcid": "34394", 00:20:19.952 "trtype": "TCP" 00:20:19.952 }, 00:20:19.952 "qid": 0, 00:20:19.952 "state": "enabled", 00:20:19.952 "thread": "nvmf_tgt_poll_group_000" 00:20:19.952 } 00:20:19.952 ]' 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.952 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.210 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.210 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.210 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.210 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.210 00:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.467 00:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 
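Each pass above is one iteration of the connect_authenticate cycle from target/auth.sh: restrict the host-side bdev layer to a single digest/dhgroup pair, register the host NQN on the target with a DH-HMAC-CHAP key (plus an optional controller key), attach a controller to force the authentication handshake, verify the resulting qpair, then tear everything down. A minimal sketch of one iteration, condensed from the commands in this trace (rpc_cmd and hostrpc are the suite's wrappers around scripts/rpc.py for the target and host sockets; the NQNs, address, and host UUID are the values used by this run):

    # host-side SPDK: allow exactly one digest/dhgroup pair for negotiation
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # target side: admit the host NQN, binding key2 and controller key ckey2
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attaching the controller triggers the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # ... verify the qpair's auth fields, then detach and unregister the host
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea

The key3 pass that starts below differs only in that no controller key is configured for it, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion is empty and nvmf_subsystem_add_host is called with --dhchap-key key3 alone.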
00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.402 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.660 00:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.660 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.660 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.919 00:20:21.919 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.919 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.919 00:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.179 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.180 { 00:20:22.180 "auth": { 00:20:22.180 "dhgroup": 
"ffdhe2048", 00:20:22.180 "digest": "sha256", 00:20:22.180 "state": "completed" 00:20:22.180 }, 00:20:22.180 "cntlid": 15, 00:20:22.180 "listen_address": { 00:20:22.180 "adrfam": "IPv4", 00:20:22.180 "traddr": "10.0.0.2", 00:20:22.180 "trsvcid": "4420", 00:20:22.180 "trtype": "TCP" 00:20:22.180 }, 00:20:22.180 "peer_address": { 00:20:22.180 "adrfam": "IPv4", 00:20:22.180 "traddr": "10.0.0.1", 00:20:22.180 "trsvcid": "34428", 00:20:22.180 "trtype": "TCP" 00:20:22.180 }, 00:20:22.180 "qid": 0, 00:20:22.180 "state": "enabled", 00:20:22.180 "thread": "nvmf_tgt_poll_group_000" 00:20:22.180 } 00:20:22.180 ]' 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.180 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.437 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.437 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.437 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.437 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.437 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.695 00:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.628 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.193 00:20:24.193 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.193 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.193 00:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.450 { 00:20:24.450 "auth": { 00:20:24.450 "dhgroup": "ffdhe3072", 00:20:24.450 "digest": "sha256", 00:20:24.450 "state": "completed" 00:20:24.450 }, 00:20:24.450 "cntlid": 17, 00:20:24.450 "listen_address": { 00:20:24.450 "adrfam": "IPv4", 00:20:24.450 "traddr": "10.0.0.2", 00:20:24.450 "trsvcid": "4420", 00:20:24.450 "trtype": "TCP" 00:20:24.450 }, 00:20:24.450 "peer_address": { 00:20:24.450 "adrfam": "IPv4", 00:20:24.450 "traddr": "10.0.0.1", 00:20:24.450 "trsvcid": "34452", 00:20:24.450 "trtype": "TCP" 00:20:24.450 }, 00:20:24.450 "qid": 0, 00:20:24.450 "state": "enabled", 00:20:24.450 "thread": "nvmf_tgt_poll_group_000" 00:20:24.450 } 00:20:24.450 ]' 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.450 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.015 00:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.580 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.581 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.838 00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.838 
00:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.402 00:20:26.402 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.402 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.402 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.660 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.660 { 00:20:26.660 "auth": { 00:20:26.660 "dhgroup": "ffdhe3072", 00:20:26.660 "digest": "sha256", 00:20:26.660 "state": "completed" 00:20:26.660 }, 00:20:26.660 "cntlid": 19, 00:20:26.660 "listen_address": { 00:20:26.660 "adrfam": "IPv4", 00:20:26.660 "traddr": "10.0.0.2", 00:20:26.660 "trsvcid": "4420", 00:20:26.660 "trtype": "TCP" 00:20:26.660 }, 00:20:26.660 "peer_address": { 00:20:26.660 "adrfam": "IPv4", 00:20:26.660 "traddr": "10.0.0.1", 00:20:26.660 "trsvcid": "34486", 00:20:26.660 "trtype": "TCP" 00:20:26.660 }, 00:20:26.660 "qid": 0, 00:20:26.660 "state": "enabled", 00:20:26.660 "thread": "nvmf_tgt_poll_group_000" 00:20:26.660 } 00:20:26.660 ]' 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.661 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.918 00:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
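Between the SPDK host-side attach/detach and the target-side cleanup, each cycle also exercises the Linux kernel initiator through nvme-cli, passing the same keys in the DHHC-1 wire format ("DHHC-1:<t>:<base64 payload>:", where the <t> field appears to identify the hash used to transform the stored secret, 00 meaning untransformed). A sketch of that leg with the secrets reduced to placeholders, since the full base64 strings appear verbatim in the trace:

    # kernel initiator: same host and controller keys, DHHC-1 wire format
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
        --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea \
        --dhchap-secret 'DHHC-1:01:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expects "disconnected 1 controller(s)"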
00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.849 00:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.413 00:20:28.413 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.413 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.413 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.671 { 00:20:28.671 "auth": { 00:20:28.671 "dhgroup": "ffdhe3072", 00:20:28.671 "digest": "sha256", 00:20:28.671 "state": "completed" 00:20:28.671 }, 00:20:28.671 "cntlid": 21, 00:20:28.671 "listen_address": { 00:20:28.671 "adrfam": "IPv4", 00:20:28.671 "traddr": "10.0.0.2", 00:20:28.671 "trsvcid": "4420", 00:20:28.671 "trtype": "TCP" 00:20:28.671 }, 00:20:28.671 "peer_address": { 00:20:28.671 "adrfam": "IPv4", 00:20:28.671 "traddr": "10.0.0.1", 00:20:28.671 "trsvcid": "34500", 00:20:28.671 "trtype": "TCP" 00:20:28.671 }, 00:20:28.671 "qid": 0, 00:20:28.671 "state": "enabled", 00:20:28.671 "thread": "nvmf_tgt_poll_group_000" 00:20:28.671 } 00:20:28.671 ]' 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.671 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.929 00:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.864 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:30.122 00:39:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.122 00:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.380 00:20:30.380 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.380 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.380 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.639 { 00:20:30.639 "auth": { 00:20:30.639 "dhgroup": "ffdhe3072", 00:20:30.639 "digest": "sha256", 00:20:30.639 "state": "completed" 00:20:30.639 }, 00:20:30.639 "cntlid": 23, 00:20:30.639 "listen_address": { 00:20:30.639 "adrfam": "IPv4", 00:20:30.639 "traddr": "10.0.0.2", 00:20:30.639 "trsvcid": "4420", 00:20:30.639 "trtype": "TCP" 00:20:30.639 }, 00:20:30.639 "peer_address": { 00:20:30.639 "adrfam": "IPv4", 00:20:30.639 "traddr": "10.0.0.1", 00:20:30.639 "trsvcid": "34260", 00:20:30.639 "trtype": "TCP" 00:20:30.639 }, 00:20:30.639 "qid": 0, 00:20:30.639 "state": "enabled", 00:20:30.639 "thread": "nvmf_tgt_poll_group_000" 00:20:30.639 } 00:20:30.639 ]' 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.639 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:30.900 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.900 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.900 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.900 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.900 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.162 00:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:31.729 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.988 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.247 00:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.505 00:20:32.505 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.505 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.505 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.763 { 00:20:32.763 "auth": { 00:20:32.763 "dhgroup": "ffdhe4096", 00:20:32.763 "digest": "sha256", 00:20:32.763 "state": "completed" 00:20:32.763 }, 00:20:32.763 "cntlid": 25, 00:20:32.763 "listen_address": { 00:20:32.763 "adrfam": "IPv4", 00:20:32.763 "traddr": "10.0.0.2", 00:20:32.763 "trsvcid": "4420", 00:20:32.763 "trtype": "TCP" 00:20:32.763 }, 00:20:32.763 "peer_address": { 00:20:32.763 "adrfam": "IPv4", 00:20:32.763 "traddr": "10.0.0.1", 00:20:32.763 "trsvcid": "34278", 00:20:32.763 "trtype": "TCP" 00:20:32.763 }, 00:20:32.763 "qid": 0, 00:20:32.763 "state": "enabled", 00:20:32.763 "thread": "nvmf_tgt_poll_group_000" 00:20:32.763 } 00:20:32.763 ]' 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.763 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.022 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.022 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.022 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.022 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.022 00:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.280 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret 
DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.847 00:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.106 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.107 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.365 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.365 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.365 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.365 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.625 00:20:34.625 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.625 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.625 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
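After every successful attach, the suite confirms that authentication actually ran on the new admin qpair rather than merely not failing: it checks the controller name, fetches the subsystem's qpairs, and asserts the negotiated digest, DH group, and final auth state. A condensed sketch of those checks as they recur in this trace (the here-string form is a rewording of the script's command substitutions; the expected values shown are the ones for this ffdhe4096 pass):

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished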
00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.883 { 00:20:34.883 "auth": { 00:20:34.883 "dhgroup": "ffdhe4096", 00:20:34.883 "digest": "sha256", 00:20:34.883 "state": "completed" 00:20:34.883 }, 00:20:34.883 "cntlid": 27, 00:20:34.883 "listen_address": { 00:20:34.883 "adrfam": "IPv4", 00:20:34.883 "traddr": "10.0.0.2", 00:20:34.883 "trsvcid": "4420", 00:20:34.883 "trtype": "TCP" 00:20:34.883 }, 00:20:34.883 "peer_address": { 00:20:34.883 "adrfam": "IPv4", 00:20:34.883 "traddr": "10.0.0.1", 00:20:34.883 "trsvcid": "34324", 00:20:34.883 "trtype": "TCP" 00:20:34.883 }, 00:20:34.883 "qid": 0, 00:20:34.883 "state": "enabled", 00:20:34.883 "thread": "nvmf_tgt_poll_group_000" 00:20:34.883 } 00:20:34.883 ]' 00:20:34.883 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.142 00:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.402 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:35.983 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.983 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:35.983 00:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.983 00:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.268 00:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.268 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.268 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.268 00:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.528 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.787 00:20:36.787 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.787 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.787 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.045 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.303 { 00:20:37.303 "auth": { 00:20:37.303 "dhgroup": "ffdhe4096", 00:20:37.303 "digest": "sha256", 00:20:37.303 "state": "completed" 00:20:37.303 }, 00:20:37.303 "cntlid": 29, 00:20:37.303 "listen_address": { 00:20:37.303 "adrfam": "IPv4", 00:20:37.303 "traddr": "10.0.0.2", 00:20:37.303 "trsvcid": "4420", 00:20:37.303 "trtype": "TCP" 00:20:37.303 }, 00:20:37.303 "peer_address": { 00:20:37.303 "adrfam": "IPv4", 00:20:37.303 "traddr": "10.0.0.1", 00:20:37.303 "trsvcid": "34350", 00:20:37.303 "trtype": "TCP" 00:20:37.303 }, 00:20:37.303 "qid": 0, 00:20:37.303 "state": "enabled", 00:20:37.303 "thread": 
"nvmf_tgt_poll_group_000" 00:20:37.303 } 00:20:37.303 ]' 00:20:37.303 00:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.303 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.561 00:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.496 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.497 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.755 00:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.322 00:20:39.322 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.322 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.322 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.580 { 00:20:39.580 "auth": { 00:20:39.580 "dhgroup": "ffdhe4096", 00:20:39.580 "digest": "sha256", 00:20:39.580 "state": "completed" 00:20:39.580 }, 00:20:39.580 "cntlid": 31, 00:20:39.580 "listen_address": { 00:20:39.580 "adrfam": "IPv4", 00:20:39.580 "traddr": "10.0.0.2", 00:20:39.580 "trsvcid": "4420", 00:20:39.580 "trtype": "TCP" 00:20:39.580 }, 00:20:39.580 "peer_address": { 00:20:39.580 "adrfam": "IPv4", 00:20:39.580 "traddr": "10.0.0.1", 00:20:39.580 "trsvcid": "56898", 00:20:39.580 "trtype": "TCP" 00:20:39.580 }, 00:20:39.580 "qid": 0, 00:20:39.580 "state": "enabled", 00:20:39.580 "thread": "nvmf_tgt_poll_group_000" 00:20:39.580 } 00:20:39.580 ]' 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.580 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.147 00:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 
637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.773 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.032 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.033 00:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.033 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.033 00:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.600 00:20:41.600 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.600 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.600 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.857 { 00:20:41.857 "auth": { 00:20:41.857 "dhgroup": "ffdhe6144", 00:20:41.857 "digest": "sha256", 00:20:41.857 "state": "completed" 00:20:41.857 }, 00:20:41.857 "cntlid": 33, 00:20:41.857 "listen_address": { 00:20:41.857 "adrfam": "IPv4", 00:20:41.857 "traddr": "10.0.0.2", 00:20:41.857 "trsvcid": "4420", 00:20:41.857 "trtype": "TCP" 00:20:41.857 }, 00:20:41.857 "peer_address": { 00:20:41.857 "adrfam": "IPv4", 00:20:41.857 "traddr": "10.0.0.1", 00:20:41.857 "trsvcid": "56920", 00:20:41.857 "trtype": "TCP" 00:20:41.857 }, 00:20:41.857 "qid": 0, 00:20:41.857 "state": "enabled", 00:20:41.857 "thread": "nvmf_tgt_poll_group_000" 00:20:41.857 } 00:20:41.857 ]' 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.857 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.858 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.858 00:39:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.421 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.986 00:39:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.243 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.244 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.244 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.810 00:20:43.810 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.810 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.810 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.069 { 00:20:44.069 "auth": { 00:20:44.069 "dhgroup": "ffdhe6144", 00:20:44.069 "digest": "sha256", 00:20:44.069 "state": "completed" 00:20:44.069 }, 00:20:44.069 "cntlid": 35, 00:20:44.069 "listen_address": { 00:20:44.069 "adrfam": "IPv4", 00:20:44.069 "traddr": "10.0.0.2", 00:20:44.069 "trsvcid": "4420", 00:20:44.069 "trtype": "TCP" 00:20:44.069 }, 00:20:44.069 
"peer_address": { 00:20:44.069 "adrfam": "IPv4", 00:20:44.069 "traddr": "10.0.0.1", 00:20:44.069 "trsvcid": "56952", 00:20:44.069 "trtype": "TCP" 00:20:44.069 }, 00:20:44.069 "qid": 0, 00:20:44.069 "state": "enabled", 00:20:44.069 "thread": "nvmf_tgt_poll_group_000" 00:20:44.069 } 00:20:44.069 ]' 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.069 00:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.373 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.309 00:39:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.309 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.875 00:20:45.875 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.875 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.875 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.133 { 00:20:46.133 "auth": { 00:20:46.133 "dhgroup": "ffdhe6144", 00:20:46.133 "digest": "sha256", 00:20:46.133 "state": "completed" 00:20:46.133 }, 00:20:46.133 "cntlid": 37, 00:20:46.133 "listen_address": { 00:20:46.133 "adrfam": "IPv4", 00:20:46.133 "traddr": "10.0.0.2", 00:20:46.133 "trsvcid": "4420", 00:20:46.133 "trtype": "TCP" 00:20:46.133 }, 00:20:46.133 "peer_address": { 00:20:46.133 "adrfam": "IPv4", 00:20:46.133 "traddr": "10.0.0.1", 00:20:46.133 "trsvcid": "56980", 00:20:46.133 "trtype": "TCP" 00:20:46.133 }, 00:20:46.133 "qid": 0, 00:20:46.133 "state": "enabled", 00:20:46.133 "thread": "nvmf_tgt_poll_group_000" 00:20:46.133 } 00:20:46.133 ]' 00:20:46.133 00:39:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.133 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.133 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.391 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.391 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.391 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.391 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.391 00:39:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.650 00:39:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.216 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.782 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.041 00:20:48.299 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:20:48.299 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.299 00:39:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.558 { 00:20:48.558 "auth": { 00:20:48.558 "dhgroup": "ffdhe6144", 00:20:48.558 "digest": "sha256", 00:20:48.558 "state": "completed" 00:20:48.558 }, 00:20:48.558 "cntlid": 39, 00:20:48.558 "listen_address": { 00:20:48.558 "adrfam": "IPv4", 00:20:48.558 "traddr": "10.0.0.2", 00:20:48.558 "trsvcid": "4420", 00:20:48.558 "trtype": "TCP" 00:20:48.558 }, 00:20:48.558 "peer_address": { 00:20:48.558 "adrfam": "IPv4", 00:20:48.558 "traddr": "10.0.0.1", 00:20:48.558 "trsvcid": "57006", 00:20:48.558 "trtype": "TCP" 00:20:48.558 }, 00:20:48.558 "qid": 0, 00:20:48.558 "state": "enabled", 00:20:48.558 "thread": "nvmf_tgt_poll_group_000" 00:20:48.558 } 00:20:48.558 ]' 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.558 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.816 00:39:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
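Each pass of the trace above repeats one cycle: restrict the host to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a key pair, attach a controller (which is where the DH-HMAC-CHAP handshake actually runs), verify the admin qpair's auth state on the target, then tear down and re-run the handshake once more with nvme-cli. A condensed sketch of that cycle, assuming the named keys key0/ckey0 were registered with the target earlier in the run (not shown in this excerpt) and that rpc_cmd is the autotest wrapper that points scripts/rpc.py at the target application's RPC socket:

# host side: offer exactly one digest and one DH group for this pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# target side: authorize the host NQN with the key pair under test
# (rpc_cmd: autotest helper around scripts/rpc.py for the target app; socket is an assumption here)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller over TCP; the DH-HMAC-CHAP exchange runs here
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# target side: the admin qpair must report the negotiated parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear down, then repeat the handshake with nvme-cli, passing the secrets inline
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
  --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea \
  --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"  # placeholders for the DHHC-1:xx:... strings
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea

The SPDK initiator leg references keys by the name they were registered under, while the nvme-cli leg takes the DHHC-1-encoded secrets directly; both legs exercise the same target-side configuration.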
00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.752 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.012 00:39:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.579 00:20:50.579 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.579 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.579 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.837 { 00:20:50.837 "auth": { 00:20:50.837 "dhgroup": "ffdhe8192", 00:20:50.837 "digest": "sha256", 00:20:50.837 "state": "completed" 00:20:50.837 }, 00:20:50.837 "cntlid": 41, 
00:20:50.837 "listen_address": { 00:20:50.837 "adrfam": "IPv4", 00:20:50.837 "traddr": "10.0.0.2", 00:20:50.837 "trsvcid": "4420", 00:20:50.837 "trtype": "TCP" 00:20:50.837 }, 00:20:50.837 "peer_address": { 00:20:50.837 "adrfam": "IPv4", 00:20:50.837 "traddr": "10.0.0.1", 00:20:50.837 "trsvcid": "51380", 00:20:50.837 "trtype": "TCP" 00:20:50.837 }, 00:20:50.837 "qid": 0, 00:20:50.837 "state": "enabled", 00:20:50.837 "thread": "nvmf_tgt_poll_group_000" 00:20:50.837 } 00:20:50.837 ]' 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.837 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.100 00:39:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.359 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.296 00:39:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.296 
00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.296 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.232 00:20:53.232 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.232 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.232 00:39:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.232 { 00:20:53.232 "auth": { 00:20:53.232 "dhgroup": "ffdhe8192", 00:20:53.232 "digest": "sha256", 00:20:53.232 "state": "completed" 00:20:53.232 }, 00:20:53.232 "cntlid": 43, 00:20:53.232 "listen_address": { 00:20:53.232 "adrfam": "IPv4", 00:20:53.232 "traddr": "10.0.0.2", 00:20:53.232 "trsvcid": "4420", 00:20:53.232 "trtype": "TCP" 00:20:53.232 }, 00:20:53.232 "peer_address": { 00:20:53.232 "adrfam": "IPv4", 00:20:53.232 "traddr": "10.0.0.1", 00:20:53.232 "trsvcid": "51402", 00:20:53.232 "trtype": "TCP" 00:20:53.232 }, 00:20:53.232 "qid": 0, 00:20:53.232 "state": "enabled", 00:20:53.232 "thread": "nvmf_tgt_poll_group_000" 00:20:53.232 } 00:20:53.232 ]' 00:20:53.232 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.491 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.750 00:39:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.317 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.576 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 00:39:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.835 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.835 00:39:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.403 00:20:55.403 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.403 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.403 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.662 { 00:20:55.662 "auth": { 00:20:55.662 "dhgroup": "ffdhe8192", 00:20:55.662 "digest": "sha256", 00:20:55.662 "state": "completed" 00:20:55.662 }, 00:20:55.662 "cntlid": 45, 00:20:55.662 "listen_address": { 00:20:55.662 "adrfam": "IPv4", 00:20:55.662 "traddr": "10.0.0.2", 00:20:55.662 "trsvcid": "4420", 00:20:55.662 "trtype": "TCP" 00:20:55.662 }, 00:20:55.662 "peer_address": { 00:20:55.662 "adrfam": "IPv4", 00:20:55.662 "traddr": "10.0.0.1", 00:20:55.662 "trsvcid": "51428", 00:20:55.662 "trtype": "TCP" 00:20:55.662 }, 00:20:55.662 "qid": 0, 00:20:55.662 "state": "enabled", 00:20:55.662 "thread": "nvmf_tgt_poll_group_000" 00:20:55.662 } 00:20:55.662 ]' 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.662 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.921 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.921 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.921 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.179 00:40:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.745 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.002 00:40:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.937 00:20:57.937 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.937 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.937 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:20:58.195 { 00:20:58.195 "auth": { 00:20:58.195 "dhgroup": "ffdhe8192", 00:20:58.195 "digest": "sha256", 00:20:58.195 "state": "completed" 00:20:58.195 }, 00:20:58.195 "cntlid": 47, 00:20:58.195 "listen_address": { 00:20:58.195 "adrfam": "IPv4", 00:20:58.195 "traddr": "10.0.0.2", 00:20:58.195 "trsvcid": "4420", 00:20:58.195 "trtype": "TCP" 00:20:58.195 }, 00:20:58.195 "peer_address": { 00:20:58.195 "adrfam": "IPv4", 00:20:58.195 "traddr": "10.0.0.1", 00:20:58.195 "trsvcid": "51464", 00:20:58.195 "trtype": "TCP" 00:20:58.195 }, 00:20:58.195 "qid": 0, 00:20:58.195 "state": "enabled", 00:20:58.195 "thread": "nvmf_tgt_poll_group_000" 00:20:58.195 } 00:20:58.195 ]' 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.195 00:40:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.195 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.195 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.195 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.195 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.195 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.453 00:40:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.388 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
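The auth.sh@91/@92/@93 markers in the trace correspond to three nested loops driving a digest by dhgroup by key matrix, with @94 reconfiguring the host and @96 invoking the connect_authenticate helper whose trace follows. A minimal sketch of that driver, with the array contents inferred only from what this excerpt exercises (sha256 and sha384 digests; null, ffdhe4096, ffdhe6144, ffdhe8192 groups; the lists in auth.sh itself may be longer):

digests=(sha256 sha384)                        # inferred from this excerpt only
dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)  # inferred from this excerpt only
for digest in "${digests[@]}"; do              # auth.sh@91
  for dhgroup in "${dhgroups[@]}"; do          # auth.sh@92
    for keyid in "${!keys[@]}"; do             # auth.sh@93 (key indices 0..3 in the trace)
      # limit the host to one digest/dhgroup pair before each handshake
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"           # auth.sh@94
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@96
    done
  done
done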
00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.646 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.904 00:20:59.904 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.904 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.904 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.163 { 00:21:00.163 "auth": { 00:21:00.163 "dhgroup": "null", 00:21:00.163 "digest": "sha384", 00:21:00.163 "state": "completed" 00:21:00.163 }, 00:21:00.163 "cntlid": 49, 00:21:00.163 "listen_address": { 00:21:00.163 "adrfam": "IPv4", 00:21:00.163 "traddr": "10.0.0.2", 00:21:00.163 "trsvcid": "4420", 00:21:00.163 "trtype": "TCP" 00:21:00.163 }, 00:21:00.163 "peer_address": { 00:21:00.163 "adrfam": "IPv4", 00:21:00.163 "traddr": "10.0.0.1", 00:21:00.163 "trsvcid": "52230", 00:21:00.163 "trtype": "TCP" 00:21:00.163 }, 00:21:00.163 "qid": 0, 00:21:00.163 "state": "enabled", 00:21:00.163 "thread": "nvmf_tgt_poll_group_000" 00:21:00.163 } 00:21:00.163 ]' 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.163 00:40:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.163 00:40:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.163 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.422 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.423 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.423 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.423 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:01.359 00:40:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.359 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.618 00:21:01.876 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.876 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.876 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.134 { 00:21:02.134 "auth": { 00:21:02.134 "dhgroup": "null", 00:21:02.134 "digest": "sha384", 00:21:02.134 "state": "completed" 00:21:02.134 }, 00:21:02.134 "cntlid": 51, 00:21:02.134 "listen_address": { 00:21:02.134 "adrfam": "IPv4", 00:21:02.134 "traddr": "10.0.0.2", 00:21:02.134 "trsvcid": "4420", 00:21:02.134 "trtype": "TCP" 00:21:02.134 }, 00:21:02.134 "peer_address": { 00:21:02.134 "adrfam": "IPv4", 00:21:02.134 "traddr": "10.0.0.1", 00:21:02.134 "trsvcid": "52274", 00:21:02.134 "trtype": "TCP" 00:21:02.134 }, 00:21:02.134 "qid": 0, 00:21:02.134 "state": "enabled", 00:21:02.134 "thread": "nvmf_tgt_poll_group_000" 00:21:02.134 } 00:21:02.134 ]' 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.134 00:40:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.392 00:40:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:03.326 00:40:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.326 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.585 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.880 00:21:03.880 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.880 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.880 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.137 { 00:21:04.137 "auth": { 00:21:04.137 "dhgroup": "null", 00:21:04.137 "digest": "sha384", 00:21:04.137 "state": "completed" 00:21:04.137 }, 00:21:04.137 "cntlid": 53, 00:21:04.137 "listen_address": { 00:21:04.137 "adrfam": "IPv4", 00:21:04.137 "traddr": "10.0.0.2", 00:21:04.137 "trsvcid": "4420", 00:21:04.137 "trtype": "TCP" 00:21:04.137 }, 00:21:04.137 "peer_address": { 00:21:04.137 "adrfam": "IPv4", 00:21:04.137 "traddr": "10.0.0.1", 00:21:04.137 "trsvcid": "52304", 00:21:04.137 "trtype": "TCP" 00:21:04.137 }, 00:21:04.137 "qid": 0, 00:21:04.137 "state": "enabled", 00:21:04.137 "thread": "nvmf_tgt_poll_group_000" 00:21:04.137 } 00:21:04.137 ]' 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.137 00:40:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.137 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:04.137 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.394 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.394 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.394 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.652 00:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.217 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.783 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.041 00:21:06.041 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.041 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.041 00:40:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.299 { 00:21:06.299 "auth": { 00:21:06.299 "dhgroup": "null", 00:21:06.299 "digest": "sha384", 00:21:06.299 "state": "completed" 00:21:06.299 }, 00:21:06.299 "cntlid": 55, 00:21:06.299 "listen_address": { 00:21:06.299 "adrfam": "IPv4", 00:21:06.299 "traddr": "10.0.0.2", 00:21:06.299 "trsvcid": "4420", 00:21:06.299 "trtype": "TCP" 00:21:06.299 }, 00:21:06.299 "peer_address": { 00:21:06.299 "adrfam": "IPv4", 00:21:06.299 "traddr": "10.0.0.1", 00:21:06.299 "trsvcid": "52332", 00:21:06.299 "trtype": "TCP" 00:21:06.299 }, 00:21:06.299 "qid": 0, 00:21:06.299 "state": "enabled", 00:21:06.299 "thread": "nvmf_tgt_poll_group_000" 00:21:06.299 } 00:21:06.299 ]' 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.299 00:40:11 
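Each iteration then validates the authenticated qpair. The backslash-escaped right-hand sides in the log ([[ sha384 == \s\h\a\3\8\4 ]]) are simply how bash xtrace renders quoted string comparisons. Reconstructed, the checks at auth.sh@44-48 amount to this sketch (the herestring plumbing is an assumption; the commands and expected values are from the log):

    # Confirm the controller attached, then assert the negotiated auth parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null"      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]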
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:06.299 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.557 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.557 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.557 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.815 00:40:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.381 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.640 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.207 00:21:08.207 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.207 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.207 00:40:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.207 { 00:21:08.207 "auth": { 00:21:08.207 "dhgroup": "ffdhe2048", 00:21:08.207 "digest": "sha384", 00:21:08.207 "state": "completed" 00:21:08.207 }, 00:21:08.207 "cntlid": 57, 00:21:08.207 "listen_address": { 00:21:08.207 "adrfam": "IPv4", 00:21:08.207 "traddr": "10.0.0.2", 00:21:08.207 "trsvcid": "4420", 00:21:08.207 "trtype": "TCP" 00:21:08.207 }, 00:21:08.207 "peer_address": { 00:21:08.207 "adrfam": "IPv4", 00:21:08.207 "traddr": "10.0.0.1", 00:21:08.207 "trsvcid": "52368", 00:21:08.207 "trtype": "TCP" 00:21:08.207 }, 00:21:08.207 "qid": 0, 00:21:08.207 "state": "enabled", 00:21:08.207 "thread": "nvmf_tgt_poll_group_000" 00:21:08.207 } 00:21:08.207 ]' 00:21:08.207 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.464 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.722 00:40:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret 
DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.288 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.547 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.806 00:40:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.806 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.806 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.063 00:21:10.063 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.063 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.063 00:40:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
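At auth.sh@92-96 the log shows the driving loops: for each allowed DH group the host's transport options are re-restricted with bdev_nvme_set_options, and connect_authenticate is re-run for every key index, so the dhgroup later asserted on the qpair is forced by what the host offers. A sketch reconstructed from the xtrace markers; the array contents are an assumption, though this stretch of the log visibly covers null, ffdhe2048 and ffdhe3072 under sha384:

    for dhgroup in "${dhgroups[@]}"; do        # e.g. null ffdhe2048 ffdhe3072 ...
        for keyid in "${!keys[@]}"; do         # key indices 0..3
            # Limit the host to one digest and one DH group for this pass.
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done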
00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.321 { 00:21:10.321 "auth": { 00:21:10.321 "dhgroup": "ffdhe2048", 00:21:10.321 "digest": "sha384", 00:21:10.321 "state": "completed" 00:21:10.321 }, 00:21:10.321 "cntlid": 59, 00:21:10.321 "listen_address": { 00:21:10.321 "adrfam": "IPv4", 00:21:10.321 "traddr": "10.0.0.2", 00:21:10.321 "trsvcid": "4420", 00:21:10.321 "trtype": "TCP" 00:21:10.321 }, 00:21:10.321 "peer_address": { 00:21:10.321 "adrfam": "IPv4", 00:21:10.321 "traddr": "10.0.0.1", 00:21:10.321 "trsvcid": "40208", 00:21:10.321 "trtype": "TCP" 00:21:10.321 }, 00:21:10.321 "qid": 0, 00:21:10.321 "state": "enabled", 00:21:10.321 "thread": "nvmf_tgt_poll_group_000" 00:21:10.321 } 00:21:10.321 ]' 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.321 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.579 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.579 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.579 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.836 00:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.768 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.026 00:21:12.284 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.284 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.284 00:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.543 { 00:21:12.543 "auth": { 00:21:12.543 "dhgroup": "ffdhe2048", 00:21:12.543 "digest": "sha384", 00:21:12.543 "state": "completed" 00:21:12.543 }, 00:21:12.543 "cntlid": 61, 00:21:12.543 "listen_address": { 00:21:12.543 "adrfam": "IPv4", 00:21:12.543 "traddr": "10.0.0.2", 00:21:12.543 "trsvcid": "4420", 00:21:12.543 "trtype": "TCP" 00:21:12.543 }, 00:21:12.543 "peer_address": { 00:21:12.543 "adrfam": "IPv4", 00:21:12.543 "traddr": "10.0.0.1", 00:21:12.543 "trsvcid": "40238", 00:21:12.543 "trtype": "TCP" 00:21:12.543 }, 00:21:12.543 "qid": 0, 00:21:12.543 "state": "enabled", 00:21:12.543 "thread": 
"nvmf_tgt_poll_group_000" 00:21:12.543 } 00:21:12.543 ]' 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.543 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.109 00:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:13.672 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.673 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.930 00:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.187 00:21:14.187 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.187 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.187 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.445 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.445 { 00:21:14.445 "auth": { 00:21:14.445 "dhgroup": "ffdhe2048", 00:21:14.445 "digest": "sha384", 00:21:14.445 "state": "completed" 00:21:14.445 }, 00:21:14.445 "cntlid": 63, 00:21:14.445 "listen_address": { 00:21:14.445 "adrfam": "IPv4", 00:21:14.445 "traddr": "10.0.0.2", 00:21:14.445 "trsvcid": "4420", 00:21:14.445 "trtype": "TCP" 00:21:14.445 }, 00:21:14.445 "peer_address": { 00:21:14.445 "adrfam": "IPv4", 00:21:14.445 "traddr": "10.0.0.1", 00:21:14.445 "trsvcid": "40266", 00:21:14.445 "trtype": "TCP" 00:21:14.445 }, 00:21:14.445 "qid": 0, 00:21:14.445 "state": "enabled", 00:21:14.445 "thread": "nvmf_tgt_poll_group_000" 00:21:14.445 } 00:21:14.445 ]' 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.704 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.961 00:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 
637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:15.894 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.894 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:15.894 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.895 00:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.468 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.468 00:40:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.468 { 00:21:16.468 "auth": { 00:21:16.468 "dhgroup": "ffdhe3072", 00:21:16.468 "digest": "sha384", 00:21:16.468 "state": "completed" 00:21:16.468 }, 00:21:16.468 "cntlid": 65, 00:21:16.468 "listen_address": { 00:21:16.468 "adrfam": "IPv4", 00:21:16.468 "traddr": "10.0.0.2", 00:21:16.468 "trsvcid": "4420", 00:21:16.468 "trtype": "TCP" 00:21:16.468 }, 00:21:16.468 "peer_address": { 00:21:16.468 "adrfam": "IPv4", 00:21:16.468 "traddr": "10.0.0.1", 00:21:16.468 "trsvcid": "40286", 00:21:16.468 "trtype": "TCP" 00:21:16.468 }, 00:21:16.468 "qid": 0, 00:21:16.468 "state": "enabled", 00:21:16.468 "thread": "nvmf_tgt_poll_group_000" 00:21:16.468 } 00:21:16.468 ]' 00:21:16.468 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.725 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.983 00:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.916 
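The kernel-initiator passes above (nvme connect at auth.sh@52) carry the same key material as --dhchap-secret/--dhchap-ctrl-secret strings. These appear to follow the NVMe DH-HMAC-CHAP secret representation "DHHC-1:<hh>:<base64>:", where, as described for nvme-cli, <hh> names the hash used to transform the key (00 = unhashed, 01/02/03 = SHA-256/384/512) and the Base64 payload carries the key bytes followed by a 4-byte CRC-32 check value. A quick sanity check of a key's length, sketched with the key0 secret from the log and assuming a coreutils-style base64:

    key='DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==:'
    b64=${key#DHHC-1:*:}                 # drop the "DHHC-1:00:" prefix
    b64=${b64%:}                         # drop the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # prints 52: 48 key bytes + 4-byte CRC-32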
00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.916 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.173 00:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.430 00:21:18.430 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.430 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.430 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.688 { 00:21:18.688 "auth": { 00:21:18.688 "dhgroup": "ffdhe3072", 00:21:18.688 "digest": "sha384", 00:21:18.688 "state": "completed" 00:21:18.688 }, 00:21:18.688 "cntlid": 67, 00:21:18.688 "listen_address": { 00:21:18.688 "adrfam": "IPv4", 00:21:18.688 "traddr": "10.0.0.2", 00:21:18.688 "trsvcid": "4420", 00:21:18.688 "trtype": "TCP" 00:21:18.688 }, 00:21:18.688 "peer_address": { 00:21:18.688 
"adrfam": "IPv4", 00:21:18.688 "traddr": "10.0.0.1", 00:21:18.688 "trsvcid": "40316", 00:21:18.688 "trtype": "TCP" 00:21:18.688 }, 00:21:18.688 "qid": 0, 00:21:18.688 "state": "enabled", 00:21:18.688 "thread": "nvmf_tgt_poll_group_000" 00:21:18.688 } 00:21:18.688 ]' 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.688 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.945 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.945 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.945 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.203 00:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.769 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.027 00:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.284 00:21:20.284 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.284 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.284 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.542 { 00:21:20.542 "auth": { 00:21:20.542 "dhgroup": "ffdhe3072", 00:21:20.542 "digest": "sha384", 00:21:20.542 "state": "completed" 00:21:20.542 }, 00:21:20.542 "cntlid": 69, 00:21:20.542 "listen_address": { 00:21:20.542 "adrfam": "IPv4", 00:21:20.542 "traddr": "10.0.0.2", 00:21:20.542 "trsvcid": "4420", 00:21:20.542 "trtype": "TCP" 00:21:20.542 }, 00:21:20.542 "peer_address": { 00:21:20.542 "adrfam": "IPv4", 00:21:20.542 "traddr": "10.0.0.1", 00:21:20.542 "trsvcid": "33232", 00:21:20.542 "trtype": "TCP" 00:21:20.542 }, 00:21:20.542 "qid": 0, 00:21:20.542 "state": "enabled", 00:21:20.542 "thread": "nvmf_tgt_poll_group_000" 00:21:20.542 } 00:21:20.542 ]' 00:21:20.542 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.799 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.057 00:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.994 00:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.561 00:21:22.561 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
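Each connect_authenticate pass traced above is the same three-step RPC exchange. A minimal sketch of one pass follows, assuming the target app answers on rpc.py's default socket and the host app on /var/tmp/host.sock, as both appear in this log; the key2/ckey2 names follow the key0..key3/ckey0..ckey3 scheme the trace uses, and the secrets behind them are whatever auth.sh loaded earlier.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea

# 1. Pin the host initiator to one digest/dhgroup pair (host app socket).
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Register the host and its DH-HMAC-CHAP keys on the target (default socket).
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host app; the fabric connect must
#    complete DH-HMAC-CHAP authentication for the attach to succeed.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2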
00:21:22.561 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.561 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.819 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.819 { 00:21:22.819 "auth": { 00:21:22.819 "dhgroup": "ffdhe3072", 00:21:22.819 "digest": "sha384", 00:21:22.819 "state": "completed" 00:21:22.820 }, 00:21:22.820 "cntlid": 71, 00:21:22.820 "listen_address": { 00:21:22.820 "adrfam": "IPv4", 00:21:22.820 "traddr": "10.0.0.2", 00:21:22.820 "trsvcid": "4420", 00:21:22.820 "trtype": "TCP" 00:21:22.820 }, 00:21:22.820 "peer_address": { 00:21:22.820 "adrfam": "IPv4", 00:21:22.820 "traddr": "10.0.0.1", 00:21:22.820 "trsvcid": "33260", 00:21:22.820 "trtype": "TCP" 00:21:22.820 }, 00:21:22.820 "qid": 0, 00:21:22.820 "state": "enabled", 00:21:22.820 "thread": "nvmf_tgt_poll_group_000" 00:21:22.820 } 00:21:22.820 ]' 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.820 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.078 00:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.014 00:40:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.014 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.273 00:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.533 00:21:24.533 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.533 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.533 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.793 { 00:21:24.793 "auth": { 00:21:24.793 "dhgroup": "ffdhe4096", 00:21:24.793 "digest": "sha384", 00:21:24.793 "state": "completed" 00:21:24.793 }, 00:21:24.793 "cntlid": 73, 00:21:24.793 
"listen_address": { 00:21:24.793 "adrfam": "IPv4", 00:21:24.793 "traddr": "10.0.0.2", 00:21:24.793 "trsvcid": "4420", 00:21:24.793 "trtype": "TCP" 00:21:24.793 }, 00:21:24.793 "peer_address": { 00:21:24.793 "adrfam": "IPv4", 00:21:24.793 "traddr": "10.0.0.1", 00:21:24.793 "trsvcid": "33290", 00:21:24.793 "trtype": "TCP" 00:21:24.793 }, 00:21:24.793 "qid": 0, 00:21:24.793 "state": "enabled", 00:21:24.793 "thread": "nvmf_tgt_poll_group_000" 00:21:24.793 } 00:21:24.793 ]' 00:21:24.793 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.051 00:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.309 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.245 00:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.245 00:40:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.245 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.504 00:21:26.763 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.763 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.763 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.022 { 00:21:27.022 "auth": { 00:21:27.022 "dhgroup": "ffdhe4096", 00:21:27.022 "digest": "sha384", 00:21:27.022 "state": "completed" 00:21:27.022 }, 00:21:27.022 "cntlid": 75, 00:21:27.022 "listen_address": { 00:21:27.022 "adrfam": "IPv4", 00:21:27.022 "traddr": "10.0.0.2", 00:21:27.022 "trsvcid": "4420", 00:21:27.022 "trtype": "TCP" 00:21:27.022 }, 00:21:27.022 "peer_address": { 00:21:27.022 "adrfam": "IPv4", 00:21:27.022 "traddr": "10.0.0.1", 00:21:27.022 "trsvcid": "33324", 00:21:27.022 "trtype": "TCP" 00:21:27.022 }, 00:21:27.022 "qid": 0, 00:21:27.022 "state": "enabled", 00:21:27.022 "thread": "nvmf_tgt_poll_group_000" 00:21:27.022 } 00:21:27.022 ]' 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.022 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.281 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:21:27.281 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.281 00:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.540 00:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:28.107 00:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.107 00:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:28.107 00:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.107 00:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.107 00:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.107 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.107 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.107 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.365 00:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.623 00:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.623 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.623 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.881 00:21:28.881 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.881 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.881 00:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.139 { 00:21:29.139 "auth": { 00:21:29.139 "dhgroup": "ffdhe4096", 00:21:29.139 "digest": "sha384", 00:21:29.139 "state": "completed" 00:21:29.139 }, 00:21:29.139 "cntlid": 77, 00:21:29.139 "listen_address": { 00:21:29.139 "adrfam": "IPv4", 00:21:29.139 "traddr": "10.0.0.2", 00:21:29.139 "trsvcid": "4420", 00:21:29.139 "trtype": "TCP" 00:21:29.139 }, 00:21:29.139 "peer_address": { 00:21:29.139 "adrfam": "IPv4", 00:21:29.139 "traddr": "10.0.0.1", 00:21:29.139 "trsvcid": "33350", 00:21:29.139 "trtype": "TCP" 00:21:29.139 }, 00:21:29.139 "qid": 0, 00:21:29.139 "state": "enabled", 00:21:29.139 "thread": "nvmf_tgt_poll_group_000" 00:21:29.139 } 00:21:29.139 ]' 00:21:29.139 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.398 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.658 00:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.593 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.852 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.110 00:21:31.110 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.110 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.110 00:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:21:31.368 { 00:21:31.368 "auth": { 00:21:31.368 "dhgroup": "ffdhe4096", 00:21:31.368 "digest": "sha384", 00:21:31.368 "state": "completed" 00:21:31.368 }, 00:21:31.368 "cntlid": 79, 00:21:31.368 "listen_address": { 00:21:31.368 "adrfam": "IPv4", 00:21:31.368 "traddr": "10.0.0.2", 00:21:31.368 "trsvcid": "4420", 00:21:31.368 "trtype": "TCP" 00:21:31.368 }, 00:21:31.368 "peer_address": { 00:21:31.368 "adrfam": "IPv4", 00:21:31.368 "traddr": "10.0.0.1", 00:21:31.368 "trsvcid": "47572", 00:21:31.368 "trtype": "TCP" 00:21:31.368 }, 00:21:31.368 "qid": 0, 00:21:31.368 "state": "enabled", 00:21:31.368 "thread": "nvmf_tgt_poll_group_000" 00:21:31.368 } 00:21:31.368 ]' 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.368 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.628 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.887 00:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:32.455 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.455 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:32.455 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.455 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.714 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.714 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.714 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.714 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.714 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
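The @92/@93 loop markers in the trace show how the test matrix is driven. This is a sketch of that driver, assuming the dhgroups and keys arrays are populated earlier in target/auth.sh and that hostrpc and connect_authenticate are the helper functions whose expansions appear throughout this log.

# Sweep every dhgroup against every key id (sketch of the traced loops).
for dhgroup in "${dhgroups[@]}"; do                            # auth.sh@92
    for keyid in "${!keys[@]}"; do                             # auth.sh@93
        # Reconfigure the host initiator, then run one authenticated attach.
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"  # auth.sh@94
        connect_authenticate sha384 "$dhgroup" "$keyid"           # auth.sh@96
    done
done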
00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.973 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.974 00:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.974 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.974 00:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.232 00:21:33.492 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.492 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.492 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.751 { 00:21:33.751 "auth": { 00:21:33.751 "dhgroup": "ffdhe6144", 00:21:33.751 "digest": "sha384", 00:21:33.751 "state": "completed" 00:21:33.751 }, 00:21:33.751 "cntlid": 81, 00:21:33.751 "listen_address": { 00:21:33.751 "adrfam": "IPv4", 00:21:33.751 "traddr": "10.0.0.2", 00:21:33.751 "trsvcid": "4420", 00:21:33.751 "trtype": "TCP" 00:21:33.751 }, 00:21:33.751 "peer_address": { 00:21:33.751 "adrfam": "IPv4", 00:21:33.751 "traddr": "10.0.0.1", 00:21:33.751 "trsvcid": "47610", 00:21:33.751 "trtype": "TCP" 00:21:33.751 }, 00:21:33.751 "qid": 0, 00:21:33.751 "state": "enabled", 00:21:33.751 "thread": "nvmf_tgt_poll_group_000" 00:21:33.751 } 00:21:33.751 ]' 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.751 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.010 00:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:34.946 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.205 00:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.463 00:21:35.720 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.721 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.721 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.978 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.979 { 00:21:35.979 "auth": { 00:21:35.979 "dhgroup": "ffdhe6144", 00:21:35.979 "digest": "sha384", 00:21:35.979 "state": "completed" 00:21:35.979 }, 00:21:35.979 "cntlid": 83, 00:21:35.979 "listen_address": { 00:21:35.979 "adrfam": "IPv4", 00:21:35.979 "traddr": "10.0.0.2", 00:21:35.979 "trsvcid": "4420", 00:21:35.979 "trtype": "TCP" 00:21:35.979 }, 00:21:35.979 "peer_address": { 00:21:35.979 "adrfam": "IPv4", 00:21:35.979 "traddr": "10.0.0.1", 00:21:35.979 "trsvcid": "47634", 00:21:35.979 "trtype": "TCP" 00:21:35.979 }, 00:21:35.979 "qid": 0, 00:21:35.979 "state": "enabled", 00:21:35.979 "thread": "nvmf_tgt_poll_group_000" 00:21:35.979 } 00:21:35.979 ]' 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.979 00:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.236 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:37.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.171 00:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.430 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.687 00:21:37.687 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.687 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.687 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.944 { 00:21:37.944 "auth": { 00:21:37.944 "dhgroup": "ffdhe6144", 00:21:37.944 "digest": "sha384", 00:21:37.944 "state": "completed" 00:21:37.944 }, 00:21:37.944 "cntlid": 85, 00:21:37.944 "listen_address": { 00:21:37.944 "adrfam": "IPv4", 00:21:37.944 "traddr": "10.0.0.2", 00:21:37.944 "trsvcid": "4420", 00:21:37.944 "trtype": "TCP" 00:21:37.944 }, 00:21:37.944 "peer_address": { 00:21:37.944 "adrfam": "IPv4", 00:21:37.944 "traddr": "10.0.0.1", 00:21:37.944 "trsvcid": "47662", 00:21:37.944 "trtype": "TCP" 00:21:37.944 }, 00:21:37.944 "qid": 0, 00:21:37.944 "state": "enabled", 00:21:37.944 "thread": "nvmf_tgt_poll_group_000" 00:21:37.944 } 00:21:37.944 ]' 00:21:37.944 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.201 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.201 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.201 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.201 00:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.201 00:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.201 00:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.201 00:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.457 00:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:39.385 00:40:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.385 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.642 00:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.642 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.642 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.899 00:21:39.899 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.899 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.899 00:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.465 { 00:21:40.465 "auth": { 00:21:40.465 "dhgroup": "ffdhe6144", 00:21:40.465 "digest": "sha384", 00:21:40.465 "state": "completed" 00:21:40.465 }, 00:21:40.465 "cntlid": 87, 00:21:40.465 "listen_address": { 00:21:40.465 "adrfam": "IPv4", 00:21:40.465 "traddr": "10.0.0.2", 00:21:40.465 "trsvcid": "4420", 00:21:40.465 "trtype": "TCP" 00:21:40.465 }, 00:21:40.465 "peer_address": { 00:21:40.465 "adrfam": "IPv4", 00:21:40.465 "traddr": "10.0.0.1", 00:21:40.465 "trsvcid": "56564", 00:21:40.465 "trtype": "TCP" 00:21:40.465 }, 00:21:40.465 "qid": 0, 00:21:40.465 "state": "enabled", 00:21:40.465 "thread": "nvmf_tgt_poll_group_000" 00:21:40.465 } 00:21:40.465 ]' 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.465 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.723 00:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.739 00:40:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.739 00:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.307 00:21:42.565 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.565 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.565 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.824 { 00:21:42.824 "auth": { 00:21:42.824 "dhgroup": "ffdhe8192", 00:21:42.824 "digest": "sha384", 00:21:42.824 "state": "completed" 00:21:42.824 }, 00:21:42.824 "cntlid": 89, 00:21:42.824 "listen_address": { 00:21:42.824 "adrfam": "IPv4", 00:21:42.824 "traddr": "10.0.0.2", 00:21:42.824 "trsvcid": "4420", 00:21:42.824 "trtype": "TCP" 00:21:42.824 }, 00:21:42.824 "peer_address": { 00:21:42.824 "adrfam": "IPv4", 00:21:42.824 "traddr": "10.0.0.1", 00:21:42.824 "trsvcid": "56588", 00:21:42.824 "trtype": "TCP" 00:21:42.824 }, 00:21:42.824 "qid": 0, 00:21:42.824 "state": "enabled", 00:21:42.824 "thread": "nvmf_tgt_poll_group_000" 00:21:42.824 } 00:21:42.824 ]' 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.824 00:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.393 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret 
DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:43.960 00:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.220 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.155 00:21:45.155 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.155 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.155 00:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
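The per-iteration cycle traced above is self-contained enough to reproduce by hand. Below is a minimal sketch of one connect_authenticate pass, built only from commands that appear verbatim in this trace; the NQNs, address, and host socket path are the ones from this run, the target-side RPC socket is assumed to be rpc.py's default, and keys key1/ckey1 are assumed to be loaded already.

# Sketch of one connect_authenticate cycle (sha384 / ffdhe8192 / key1, as in
# the surrounding trace). host_rpc drives the initiator app on
# /var/tmp/host.sock; tgt_rpc drives the nvmf target (default socket assumed).
host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
tgt_rpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea
subnqn=nqn.2024-03.io.spdk:cnode0

# 1. Pin the initiator to a single digest/dhgroup combination.
host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# 2. Allow the host on the subsystem, then attach a controller with the keys.
tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 3. Verify, then tear down so the next digest/dhgroup/key pass starts clean.
host_rpc bdev_nvme_get_controllers | jq -r '.[].name'                 # expect: nvme0
tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state' # expect: completed
host_rpc bdev_nvme_detach_controller nvme0
tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace additionally re-checks each key pair through the kernel initiator between the detach and remove_host steps, via nvme connect -t tcp ... --dhchap-secret <DHHC-1 key> [--dhchap-ctrl-secret <DHHC-1 ckey>] followed by nvme disconnect, exactly as logged above.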
00:21:45.155 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.155 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.155 00:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.155 00:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.414 { 00:21:45.414 "auth": { 00:21:45.414 "dhgroup": "ffdhe8192", 00:21:45.414 "digest": "sha384", 00:21:45.414 "state": "completed" 00:21:45.414 }, 00:21:45.414 "cntlid": 91, 00:21:45.414 "listen_address": { 00:21:45.414 "adrfam": "IPv4", 00:21:45.414 "traddr": "10.0.0.2", 00:21:45.414 "trsvcid": "4420", 00:21:45.414 "trtype": "TCP" 00:21:45.414 }, 00:21:45.414 "peer_address": { 00:21:45.414 "adrfam": "IPv4", 00:21:45.414 "traddr": "10.0.0.1", 00:21:45.414 "trsvcid": "56618", 00:21:45.414 "trtype": "TCP" 00:21:45.414 }, 00:21:45.414 "qid": 0, 00:21:45.414 "state": "enabled", 00:21:45.414 "thread": "nvmf_tgt_poll_group_000" 00:21:45.414 } 00:21:45.414 ]' 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.414 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.673 00:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:21:46.608 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.866 00:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.432 00:21:47.432 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.432 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.432 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.689 { 00:21:47.689 "auth": { 00:21:47.689 "dhgroup": "ffdhe8192", 00:21:47.689 "digest": "sha384", 00:21:47.689 "state": "completed" 00:21:47.689 }, 00:21:47.689 "cntlid": 93, 00:21:47.689 "listen_address": { 00:21:47.689 "adrfam": "IPv4", 00:21:47.689 "traddr": "10.0.0.2", 00:21:47.689 "trsvcid": "4420", 00:21:47.689 "trtype": "TCP" 00:21:47.689 }, 00:21:47.689 "peer_address": { 00:21:47.689 "adrfam": "IPv4", 00:21:47.689 "traddr": "10.0.0.1", 00:21:47.689 "trsvcid": "56630", 00:21:47.689 
"trtype": "TCP" 00:21:47.689 }, 00:21:47.689 "qid": 0, 00:21:47.689 "state": "enabled", 00:21:47.689 "thread": "nvmf_tgt_poll_group_000" 00:21:47.689 } 00:21:47.689 ]' 00:21:47.689 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.946 00:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.234 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:48.807 00:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:49.373 00:40:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.373 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.941 00:21:49.941 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.941 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.941 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.200 { 00:21:50.200 "auth": { 00:21:50.200 "dhgroup": "ffdhe8192", 00:21:50.200 "digest": "sha384", 00:21:50.200 "state": "completed" 00:21:50.200 }, 00:21:50.200 "cntlid": 95, 00:21:50.200 "listen_address": { 00:21:50.200 "adrfam": "IPv4", 00:21:50.200 "traddr": "10.0.0.2", 00:21:50.200 "trsvcid": "4420", 00:21:50.200 "trtype": "TCP" 00:21:50.200 }, 00:21:50.200 "peer_address": { 00:21:50.200 "adrfam": "IPv4", 00:21:50.200 "traddr": "10.0.0.1", 00:21:50.200 "trsvcid": "58714", 00:21:50.200 "trtype": "TCP" 00:21:50.200 }, 00:21:50.200 "qid": 0, 00:21:50.200 "state": "enabled", 00:21:50.200 "thread": "nvmf_tgt_poll_group_000" 00:21:50.200 } 00:21:50.200 ]' 00:21:50.200 00:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.200 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.200 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.200 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.200 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.512 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.512 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.512 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.770 00:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:51.338 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.597 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.855 00:21:51.855 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.855 
00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.855 00:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.113 { 00:21:52.113 "auth": { 00:21:52.113 "dhgroup": "null", 00:21:52.113 "digest": "sha512", 00:21:52.113 "state": "completed" 00:21:52.113 }, 00:21:52.113 "cntlid": 97, 00:21:52.113 "listen_address": { 00:21:52.113 "adrfam": "IPv4", 00:21:52.113 "traddr": "10.0.0.2", 00:21:52.113 "trsvcid": "4420", 00:21:52.113 "trtype": "TCP" 00:21:52.113 }, 00:21:52.113 "peer_address": { 00:21:52.113 "adrfam": "IPv4", 00:21:52.113 "traddr": "10.0.0.1", 00:21:52.113 "trsvcid": "58748", 00:21:52.113 "trtype": "TCP" 00:21:52.113 }, 00:21:52.113 "qid": 0, 00:21:52.113 "state": "enabled", 00:21:52.113 "thread": "nvmf_tgt_poll_group_000" 00:21:52.113 } 00:21:52.113 ]' 00:21:52.113 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.371 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.372 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.630 00:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.565 00:40:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:53.565 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.566 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.825 00:21:53.825 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.825 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.825 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.083 { 00:21:54.083 "auth": { 00:21:54.083 "dhgroup": "null", 00:21:54.083 "digest": "sha512", 00:21:54.083 "state": "completed" 00:21:54.083 }, 00:21:54.083 "cntlid": 99, 00:21:54.083 "listen_address": { 00:21:54.083 
"adrfam": "IPv4", 00:21:54.083 "traddr": "10.0.0.2", 00:21:54.083 "trsvcid": "4420", 00:21:54.083 "trtype": "TCP" 00:21:54.083 }, 00:21:54.083 "peer_address": { 00:21:54.083 "adrfam": "IPv4", 00:21:54.083 "traddr": "10.0.0.1", 00:21:54.083 "trsvcid": "58764", 00:21:54.083 "trtype": "TCP" 00:21:54.083 }, 00:21:54.083 "qid": 0, 00:21:54.083 "state": "enabled", 00:21:54.083 "thread": "nvmf_tgt_poll_group_000" 00:21:54.083 } 00:21:54.083 ]' 00:21:54.083 00:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.341 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.600 00:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:21:55.175 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.175 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:55.175 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.175 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.433 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.434 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.434 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.434 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.692 00:41:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.692 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.950 00:21:55.950 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.950 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.950 00:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.208 { 00:21:56.208 "auth": { 00:21:56.208 "dhgroup": "null", 00:21:56.208 "digest": "sha512", 00:21:56.208 "state": "completed" 00:21:56.208 }, 00:21:56.208 "cntlid": 101, 00:21:56.208 "listen_address": { 00:21:56.208 "adrfam": "IPv4", 00:21:56.208 "traddr": "10.0.0.2", 00:21:56.208 "trsvcid": "4420", 00:21:56.208 "trtype": "TCP" 00:21:56.208 }, 00:21:56.208 "peer_address": { 00:21:56.208 "adrfam": "IPv4", 00:21:56.208 "traddr": "10.0.0.1", 00:21:56.208 "trsvcid": "58798", 00:21:56.208 "trtype": "TCP" 00:21:56.208 }, 00:21:56.208 "qid": 0, 00:21:56.208 "state": "enabled", 00:21:56.208 "thread": "nvmf_tgt_poll_group_000" 00:21:56.208 } 00:21:56.208 ]' 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.208 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.467 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:56.467 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.467 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.467 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
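The three jq probes that follow each qpair dump are the actual pass/fail assertions of the test. Standalone, the check reduces to the sketch below; the expected values are the ones for this iteration (sha512 digest, null dhgroup), and rpc.py is assumed to reach the target on its default socket.

# Standalone version of the per-iteration auth check (expected values match
# the sha512/null iteration above; adjust per digest/dhgroup under test).
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]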
00:21:56.467 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.726 00:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.663 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.239 00:21:58.239 00:41:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.239 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.239 00:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.239 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.240 { 00:21:58.240 "auth": { 00:21:58.240 "dhgroup": "null", 00:21:58.240 "digest": "sha512", 00:21:58.240 "state": "completed" 00:21:58.240 }, 00:21:58.240 "cntlid": 103, 00:21:58.240 "listen_address": { 00:21:58.240 "adrfam": "IPv4", 00:21:58.240 "traddr": "10.0.0.2", 00:21:58.240 "trsvcid": "4420", 00:21:58.240 "trtype": "TCP" 00:21:58.240 }, 00:21:58.240 "peer_address": { 00:21:58.240 "adrfam": "IPv4", 00:21:58.240 "traddr": "10.0.0.1", 00:21:58.240 "trsvcid": "58824", 00:21:58.240 "trtype": "TCP" 00:21:58.240 }, 00:21:58.240 "qid": 0, 00:21:58.240 "state": "enabled", 00:21:58.240 "thread": "nvmf_tgt_poll_group_000" 00:21:58.240 } 00:21:58.240 ]' 00:21:58.240 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.501 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.761 00:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:59.764 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.023 00:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.280 00:22:00.280 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.280 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.280 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.539 { 00:22:00.539 "auth": { 00:22:00.539 "dhgroup": "ffdhe2048", 00:22:00.539 "digest": "sha512", 00:22:00.539 "state": "completed" 00:22:00.539 }, 00:22:00.539 
"cntlid": 105, 00:22:00.539 "listen_address": { 00:22:00.539 "adrfam": "IPv4", 00:22:00.539 "traddr": "10.0.0.2", 00:22:00.539 "trsvcid": "4420", 00:22:00.539 "trtype": "TCP" 00:22:00.539 }, 00:22:00.539 "peer_address": { 00:22:00.539 "adrfam": "IPv4", 00:22:00.539 "traddr": "10.0.0.1", 00:22:00.539 "trsvcid": "56322", 00:22:00.539 "trtype": "TCP" 00:22:00.539 }, 00:22:00.539 "qid": 0, 00:22:00.539 "state": "enabled", 00:22:00.539 "thread": "nvmf_tgt_poll_group_000" 00:22:00.539 } 00:22:00.539 ]' 00:22:00.539 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.797 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.056 00:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:01.992 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.250 00:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.509 00:22:02.509 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.509 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.509 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.767 { 00:22:02.767 "auth": { 00:22:02.767 "dhgroup": "ffdhe2048", 00:22:02.767 "digest": "sha512", 00:22:02.767 "state": "completed" 00:22:02.767 }, 00:22:02.767 "cntlid": 107, 00:22:02.767 "listen_address": { 00:22:02.767 "adrfam": "IPv4", 00:22:02.767 "traddr": "10.0.0.2", 00:22:02.767 "trsvcid": "4420", 00:22:02.767 "trtype": "TCP" 00:22:02.767 }, 00:22:02.767 "peer_address": { 00:22:02.767 "adrfam": "IPv4", 00:22:02.767 "traddr": "10.0.0.1", 00:22:02.767 "trsvcid": "56346", 00:22:02.767 "trtype": "TCP" 00:22:02.767 }, 00:22:02.767 "qid": 0, 00:22:02.767 "state": "enabled", 00:22:02.767 "thread": "nvmf_tgt_poll_group_000" 00:22:02.767 } 00:22:02.767 ]' 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:02.767 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.026 00:41:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.026 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.026 00:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.284 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.852 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.110 00:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.369 00:22:04.627 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.627 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.627 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.886 { 00:22:04.886 "auth": { 00:22:04.886 "dhgroup": "ffdhe2048", 00:22:04.886 "digest": "sha512", 00:22:04.886 "state": "completed" 00:22:04.886 }, 00:22:04.886 "cntlid": 109, 00:22:04.886 "listen_address": { 00:22:04.886 "adrfam": "IPv4", 00:22:04.886 "traddr": "10.0.0.2", 00:22:04.886 "trsvcid": "4420", 00:22:04.886 "trtype": "TCP" 00:22:04.886 }, 00:22:04.886 "peer_address": { 00:22:04.886 "adrfam": "IPv4", 00:22:04.886 "traddr": "10.0.0.1", 00:22:04.886 "trsvcid": "56372", 00:22:04.886 "trtype": "TCP" 00:22:04.886 }, 00:22:04.886 "qid": 0, 00:22:04.886 "state": "enabled", 00:22:04.886 "thread": "nvmf_tgt_poll_group_000" 00:22:04.886 } 00:22:04.886 ]' 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.886 00:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.454 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.025 00:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.284 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.543 00:22:06.543 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.543 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.543 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:22:06.801 { 00:22:06.801 "auth": { 00:22:06.801 "dhgroup": "ffdhe2048", 00:22:06.801 "digest": "sha512", 00:22:06.801 "state": "completed" 00:22:06.801 }, 00:22:06.801 "cntlid": 111, 00:22:06.801 "listen_address": { 00:22:06.801 "adrfam": "IPv4", 00:22:06.801 "traddr": "10.0.0.2", 00:22:06.801 "trsvcid": "4420", 00:22:06.801 "trtype": "TCP" 00:22:06.801 }, 00:22:06.801 "peer_address": { 00:22:06.801 "adrfam": "IPv4", 00:22:06.801 "traddr": "10.0.0.1", 00:22:06.801 "trsvcid": "56400", 00:22:06.801 "trtype": "TCP" 00:22:06.801 }, 00:22:06.801 "qid": 0, 00:22:06.801 "state": "enabled", 00:22:06.801 "thread": "nvmf_tgt_poll_group_000" 00:22:06.801 } 00:22:06.801 ]' 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.801 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.060 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.060 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.060 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.060 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.060 00:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.319 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:07.899 00:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.158 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.159 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.159 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.416 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.416 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.416 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.674 00:22:08.674 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.674 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.674 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.931 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.931 { 00:22:08.931 "auth": { 00:22:08.931 "dhgroup": "ffdhe3072", 00:22:08.931 "digest": "sha512", 00:22:08.931 "state": "completed" 00:22:08.931 }, 00:22:08.931 "cntlid": 113, 00:22:08.932 "listen_address": { 00:22:08.932 "adrfam": "IPv4", 00:22:08.932 "traddr": "10.0.0.2", 00:22:08.932 "trsvcid": "4420", 00:22:08.932 "trtype": "TCP" 00:22:08.932 }, 00:22:08.932 "peer_address": { 00:22:08.932 "adrfam": "IPv4", 00:22:08.932 "traddr": "10.0.0.1", 00:22:08.932 "trsvcid": "56420", 00:22:08.932 "trtype": "TCP" 00:22:08.932 }, 00:22:08.932 "qid": 0, 00:22:08.932 "state": "enabled", 00:22:08.932 "thread": "nvmf_tgt_poll_group_000" 00:22:08.932 } 00:22:08.932 ]' 00:22:08.932 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.932 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.932 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.932 00:41:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.932 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.189 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.190 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.190 00:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.448 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:10.014 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.274 00:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.274 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.532 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.532 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.532 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.790 00:22:10.790 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.790 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.790 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.048 { 00:22:11.048 "auth": { 00:22:11.048 "dhgroup": "ffdhe3072", 00:22:11.048 "digest": "sha512", 00:22:11.048 "state": "completed" 00:22:11.048 }, 00:22:11.048 "cntlid": 115, 00:22:11.048 "listen_address": { 00:22:11.048 "adrfam": "IPv4", 00:22:11.048 "traddr": "10.0.0.2", 00:22:11.048 "trsvcid": "4420", 00:22:11.048 "trtype": "TCP" 00:22:11.048 }, 00:22:11.048 "peer_address": { 00:22:11.048 "adrfam": "IPv4", 00:22:11.048 "traddr": "10.0.0.1", 00:22:11.048 "trsvcid": "38698", 00:22:11.048 "trtype": "TCP" 00:22:11.048 }, 00:22:11.048 "qid": 0, 00:22:11.048 "state": "enabled", 00:22:11.048 "thread": "nvmf_tgt_poll_group_000" 00:22:11.048 } 00:22:11.048 ]' 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.048 00:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.306 00:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.306 00:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.306 00:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.565 00:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:22:12.500 00:41:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.500 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.068 00:22:13.068 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.068 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.068 00:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.326 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.326 { 00:22:13.326 "auth": { 00:22:13.326 "dhgroup": "ffdhe3072", 00:22:13.327 "digest": "sha512", 00:22:13.327 "state": "completed" 00:22:13.327 }, 00:22:13.327 "cntlid": 117, 00:22:13.327 "listen_address": { 00:22:13.327 "adrfam": "IPv4", 00:22:13.327 "traddr": "10.0.0.2", 00:22:13.327 "trsvcid": "4420", 00:22:13.327 "trtype": "TCP" 00:22:13.327 }, 00:22:13.327 "peer_address": { 00:22:13.327 "adrfam": "IPv4", 00:22:13.327 "traddr": "10.0.0.1", 00:22:13.327 "trsvcid": "38720", 00:22:13.327 "trtype": "TCP" 00:22:13.327 }, 00:22:13.327 "qid": 0, 00:22:13.327 "state": "enabled", 00:22:13.327 "thread": "nvmf_tgt_poll_group_000" 00:22:13.327 } 00:22:13.327 ]' 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.327 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.585 00:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:22:14.520 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.521 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.780 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.038 00:22:15.038 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.038 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.038 00:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.297 { 00:22:15.297 "auth": { 00:22:15.297 "dhgroup": "ffdhe3072", 00:22:15.297 "digest": "sha512", 00:22:15.297 "state": "completed" 00:22:15.297 }, 00:22:15.297 "cntlid": 119, 00:22:15.297 "listen_address": { 00:22:15.297 "adrfam": "IPv4", 00:22:15.297 "traddr": "10.0.0.2", 00:22:15.297 "trsvcid": "4420", 00:22:15.297 "trtype": "TCP" 00:22:15.297 }, 00:22:15.297 "peer_address": { 00:22:15.297 "adrfam": "IPv4", 00:22:15.297 "traddr": "10.0.0.1", 00:22:15.297 "trsvcid": "38746", 00:22:15.297 "trtype": "TCP" 00:22:15.297 }, 00:22:15.297 "qid": 0, 00:22:15.297 "state": "enabled", 00:22:15.297 "thread": "nvmf_tgt_poll_group_000" 00:22:15.297 } 00:22:15.297 ]' 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.297 
00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.297 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.555 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:15.555 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.555 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.555 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.556 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.814 00:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.433 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.692 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.257 00:22:17.257 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.257 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.257 00:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.516 { 00:22:17.516 "auth": { 00:22:17.516 "dhgroup": "ffdhe4096", 00:22:17.516 "digest": "sha512", 00:22:17.516 "state": "completed" 00:22:17.516 }, 00:22:17.516 "cntlid": 121, 00:22:17.516 "listen_address": { 00:22:17.516 "adrfam": "IPv4", 00:22:17.516 "traddr": "10.0.0.2", 00:22:17.516 "trsvcid": "4420", 00:22:17.516 "trtype": "TCP" 00:22:17.516 }, 00:22:17.516 "peer_address": { 00:22:17.516 "adrfam": "IPv4", 00:22:17.516 "traddr": "10.0.0.1", 00:22:17.516 "trsvcid": "38778", 00:22:17.516 "trtype": "TCP" 00:22:17.516 }, 00:22:17.516 "qid": 0, 00:22:17.516 "state": "enabled", 00:22:17.516 "thread": "nvmf_tgt_poll_group_000" 00:22:17.516 } 00:22:17.516 ]' 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.516 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.775 00:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret 
DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.711 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.969 00:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.228 00:22:19.228 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.228 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.228 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
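The records above repeat one verification pattern per digest/dhgroup/key combination: point the host stack at the new DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the matching key pair, attach a controller over TCP, and confirm from the qpair JSON that authentication reached the "completed" state before detaching. A minimal standalone sketch of that loop body follows; it assumes a running SPDK target, the rpc.py paths and NQNs seen in this log, and previously registered keyring entries named key1/ckey1, so it is illustrative rather than the test script itself.

#!/usr/bin/env bash
# Sketch of one connect_authenticate-style iteration. Target-side RPCs use
# the default spdk.sock; host-side RPCs use /var/tmp/host.sock, matching
# the rpc_cmd/hostrpc split visible in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea
digest=sha512 dhgroup=ffdhe4096 key=key1 ckey=ckey1   # one combination

# Host side: restrict the initiator to this digest and DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow the host NQN with the matching key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Attaching the controller triggers the DH-HMAC-CHAP exchange.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Verify the negotiated parameters from the qpair's auth block.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0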
00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.485 { 00:22:19.485 "auth": { 00:22:19.485 "dhgroup": "ffdhe4096", 00:22:19.485 "digest": "sha512", 00:22:19.485 "state": "completed" 00:22:19.485 }, 00:22:19.485 "cntlid": 123, 00:22:19.485 "listen_address": { 00:22:19.485 "adrfam": "IPv4", 00:22:19.485 "traddr": "10.0.0.2", 00:22:19.485 "trsvcid": "4420", 00:22:19.485 "trtype": "TCP" 00:22:19.485 }, 00:22:19.485 "peer_address": { 00:22:19.485 "adrfam": "IPv4", 00:22:19.485 "traddr": "10.0.0.1", 00:22:19.485 "trsvcid": "43346", 00:22:19.485 "trtype": "TCP" 00:22:19.485 }, 00:22:19.485 "qid": 0, 00:22:19.485 "state": "enabled", 00:22:19.485 "thread": "nvmf_tgt_poll_group_000" 00:22:19.485 } 00:22:19.485 ]' 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.485 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.743 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.743 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.743 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.743 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.743 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.002 00:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:22:20.568 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.568 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:20.568 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.568 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.568 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.569 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.569 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:22:20.569 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.135 00:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.392 00:22:21.393 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.393 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.393 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.650 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.651 { 00:22:21.651 "auth": { 00:22:21.651 "dhgroup": "ffdhe4096", 00:22:21.651 "digest": "sha512", 00:22:21.651 "state": "completed" 00:22:21.651 }, 00:22:21.651 "cntlid": 125, 00:22:21.651 "listen_address": { 00:22:21.651 "adrfam": "IPv4", 00:22:21.651 "traddr": "10.0.0.2", 00:22:21.651 "trsvcid": "4420", 00:22:21.651 "trtype": "TCP" 00:22:21.651 }, 00:22:21.651 "peer_address": { 00:22:21.651 "adrfam": "IPv4", 00:22:21.651 "traddr": "10.0.0.1", 00:22:21.651 "trsvcid": "43364", 00:22:21.651 
"trtype": "TCP" 00:22:21.651 }, 00:22:21.651 "qid": 0, 00:22:21.651 "state": "enabled", 00:22:21.651 "thread": "nvmf_tgt_poll_group_000" 00:22:21.651 } 00:22:21.651 ]' 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.651 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.908 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.908 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.908 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.908 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.908 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.167 00:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:23.101 00:41:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.101 00:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.667 00:22:23.667 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.667 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.667 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.926 { 00:22:23.926 "auth": { 00:22:23.926 "dhgroup": "ffdhe4096", 00:22:23.926 "digest": "sha512", 00:22:23.926 "state": "completed" 00:22:23.926 }, 00:22:23.926 "cntlid": 127, 00:22:23.926 "listen_address": { 00:22:23.926 "adrfam": "IPv4", 00:22:23.926 "traddr": "10.0.0.2", 00:22:23.926 "trsvcid": "4420", 00:22:23.926 "trtype": "TCP" 00:22:23.926 }, 00:22:23.926 "peer_address": { 00:22:23.926 "adrfam": "IPv4", 00:22:23.926 "traddr": "10.0.0.1", 00:22:23.926 "trsvcid": "43390", 00:22:23.926 "trtype": "TCP" 00:22:23.926 }, 00:22:23.926 "qid": 0, 00:22:23.926 "state": "enabled", 00:22:23.926 "thread": "nvmf_tgt_poll_group_000" 00:22:23.926 } 00:22:23.926 ]' 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.926 00:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.493 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.060 00:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.318 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.886 00:22:25.886 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.886 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:22:25.886 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.144 { 00:22:26.144 "auth": { 00:22:26.144 "dhgroup": "ffdhe6144", 00:22:26.144 "digest": "sha512", 00:22:26.144 "state": "completed" 00:22:26.144 }, 00:22:26.144 "cntlid": 129, 00:22:26.144 "listen_address": { 00:22:26.144 "adrfam": "IPv4", 00:22:26.144 "traddr": "10.0.0.2", 00:22:26.144 "trsvcid": "4420", 00:22:26.144 "trtype": "TCP" 00:22:26.144 }, 00:22:26.144 "peer_address": { 00:22:26.144 "adrfam": "IPv4", 00:22:26.144 "traddr": "10.0.0.1", 00:22:26.144 "trsvcid": "43414", 00:22:26.144 "trtype": "TCP" 00:22:26.144 }, 00:22:26.144 "qid": 0, 00:22:26.144 "state": "enabled", 00:22:26.144 "thread": "nvmf_tgt_poll_group_000" 00:22:26.144 } 00:22:26.144 ]' 00:22:26.144 00:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.144 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.144 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.144 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.144 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.403 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.403 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.403 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.661 00:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:27.228 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.228 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:27.228 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.228 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
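Each target-side pass is mirrored by a kernel-initiator check: nvme-cli connects to the same subsystem with the DHHC-1 secrets passed on the command line, and a clean "disconnected 1 controller(s)" on teardown confirms the fabric-level round-trip before the host is removed from the subsystem. A hedged sketch of that step, with placeholder secrets standing in for the DHHC-1 strings recorded above:

# Kernel-initiator round-trip; assumes nvme-cli with DH-CHAP support and
# the addresses used throughout this log. The secrets below are
# placeholders, not this run's actual keys.
hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)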
00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.486 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.744 00:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.744 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.744 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.001 00:22:28.259 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.259 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.259 00:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.517 { 00:22:28.517 "auth": { 00:22:28.517 "dhgroup": "ffdhe6144", 00:22:28.517 "digest": "sha512", 00:22:28.517 "state": "completed" 00:22:28.517 }, 00:22:28.517 "cntlid": 131, 00:22:28.517 "listen_address": { 00:22:28.517 "adrfam": "IPv4", 00:22:28.517 "traddr": "10.0.0.2", 
00:22:28.517 "trsvcid": "4420", 00:22:28.517 "trtype": "TCP" 00:22:28.517 }, 00:22:28.517 "peer_address": { 00:22:28.517 "adrfam": "IPv4", 00:22:28.517 "traddr": "10.0.0.1", 00:22:28.517 "trsvcid": "43440", 00:22:28.517 "trtype": "TCP" 00:22:28.517 }, 00:22:28.517 "qid": 0, 00:22:28.517 "state": "enabled", 00:22:28.517 "thread": "nvmf_tgt_poll_group_000" 00:22:28.517 } 00:22:28.517 ]' 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.517 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.518 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.775 00:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.740 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.999 00:41:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.999 00:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.567 00:22:30.567 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.567 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.567 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.827 { 00:22:30.827 "auth": { 00:22:30.827 "dhgroup": "ffdhe6144", 00:22:30.827 "digest": "sha512", 00:22:30.827 "state": "completed" 00:22:30.827 }, 00:22:30.827 "cntlid": 133, 00:22:30.827 "listen_address": { 00:22:30.827 "adrfam": "IPv4", 00:22:30.827 "traddr": "10.0.0.2", 00:22:30.827 "trsvcid": "4420", 00:22:30.827 "trtype": "TCP" 00:22:30.827 }, 00:22:30.827 "peer_address": { 00:22:30.827 "adrfam": "IPv4", 00:22:30.827 "traddr": "10.0.0.1", 00:22:30.827 "trsvcid": "39392", 00:22:30.827 "trtype": "TCP" 00:22:30.827 }, 00:22:30.827 "qid": 0, 00:22:30.827 "state": "enabled", 00:22:30.827 "thread": "nvmf_tgt_poll_group_000" 00:22:30.827 } 00:22:30.827 ]' 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:30.827 00:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.085 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.021 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.280 00:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.280 00:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.280 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.280 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.848 
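The key3 pass above is the one asymmetric case: auth.sh builds the controller-key arguments as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), and because no ckey3 is defined the :+ expansion yields an empty array, so nvmf_subsystem_add_host and bdev_nvme_attach_controller both run with --dhchap-key key3 alone, that is, unidirectional authentication with no controller challenge. A small illustration of that expansion, with hypothetical key names standing in for the suite's ckeys array ($3 is the key-id argument of connect_authenticate):

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)    # ckeys[3] deliberately empty
    keyid=3                                       # stands in for "$3"
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: ':+' expands to nothing when the entry is empty/unset
    keyid=0
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey0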
00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.848 00:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.106 00:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.106 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.106 { 00:22:33.106 "auth": { 00:22:33.106 "dhgroup": "ffdhe6144", 00:22:33.106 "digest": "sha512", 00:22:33.106 "state": "completed" 00:22:33.106 }, 00:22:33.106 "cntlid": 135, 00:22:33.106 "listen_address": { 00:22:33.106 "adrfam": "IPv4", 00:22:33.106 "traddr": "10.0.0.2", 00:22:33.106 "trsvcid": "4420", 00:22:33.106 "trtype": "TCP" 00:22:33.106 }, 00:22:33.106 "peer_address": { 00:22:33.106 "adrfam": "IPv4", 00:22:33.106 "traddr": "10.0.0.1", 00:22:33.106 "trsvcid": "39406", 00:22:33.106 "trtype": "TCP" 00:22:33.106 }, 00:22:33.106 "qid": 0, 00:22:33.107 "state": "enabled", 00:22:33.107 "thread": "nvmf_tgt_poll_group_000" 00:22:33.107 } 00:22:33.107 ]' 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.107 00:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.365 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.932 00:41:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.932 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.933 00:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.191 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.126 00:22:35.127 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.127 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.127 00:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.385 { 00:22:35.385 "auth": { 00:22:35.385 "dhgroup": "ffdhe8192", 00:22:35.385 "digest": "sha512", 
00:22:35.385 "state": "completed" 00:22:35.385 }, 00:22:35.385 "cntlid": 137, 00:22:35.385 "listen_address": { 00:22:35.385 "adrfam": "IPv4", 00:22:35.385 "traddr": "10.0.0.2", 00:22:35.385 "trsvcid": "4420", 00:22:35.385 "trtype": "TCP" 00:22:35.385 }, 00:22:35.385 "peer_address": { 00:22:35.385 "adrfam": "IPv4", 00:22:35.385 "traddr": "10.0.0.1", 00:22:35.385 "trsvcid": "39432", 00:22:35.385 "trtype": "TCP" 00:22:35.385 }, 00:22:35.385 "qid": 0, 00:22:35.385 "state": "enabled", 00:22:35.385 "thread": "nvmf_tgt_poll_group_000" 00:22:35.385 } 00:22:35.385 ]' 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.385 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.643 00:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.579 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:36.838 00:41:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.838 00:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.405 00:22:37.405 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.405 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.405 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.664 { 00:22:37.664 "auth": { 00:22:37.664 "dhgroup": "ffdhe8192", 00:22:37.664 "digest": "sha512", 00:22:37.664 "state": "completed" 00:22:37.664 }, 00:22:37.664 "cntlid": 139, 00:22:37.664 "listen_address": { 00:22:37.664 "adrfam": "IPv4", 00:22:37.664 "traddr": "10.0.0.2", 00:22:37.664 "trsvcid": "4420", 00:22:37.664 "trtype": "TCP" 00:22:37.664 }, 00:22:37.664 "peer_address": { 00:22:37.664 "adrfam": "IPv4", 00:22:37.664 "traddr": "10.0.0.1", 00:22:37.664 "trsvcid": "39460", 00:22:37.664 "trtype": "TCP" 00:22:37.664 }, 00:22:37.664 "qid": 0, 00:22:37.664 "state": "enabled", 00:22:37.664 "thread": "nvmf_tgt_poll_group_000" 00:22:37.664 } 00:22:37.664 ]' 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.664 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.924 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.924 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
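After each attach, the suite asserts three fields of the first qpair reported by nvmf_subsystem_get_qpairs: the negotiated digest, the DH group, and the auth state, which must read "completed" once the exchange has finished (the \s\h\a\5\1\2-style escapes in the log are just xtrace's rendering of a quoted literal on the right-hand side of a [[ == ]] comparison). The same checks in condensed form, assuming rpc_cmd talks to the target socket as in this run:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]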
00:22:37.924 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.924 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.924 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.212 00:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:01:YmM5NWU1MTc2Mjc0YjZkZmE1MTU4ZDIzNmRjYjlkODiwIPyQ: --dhchap-ctrl-secret DHHC-1:02:YWMzY2E4MzVmZGQ5NDM3YTE2ZjA0Zjk0MmRmMGYzZTE3MmUzMzEyYjcyMzdhYjc57FguxQ==: 00:22:38.783 00:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.783 00:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:38.783 00:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.783 00:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.041 00:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.041 00:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.041 00:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.041 00:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.300 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.868 00:22:39.868 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.868 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.868 00:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.435 { 00:22:40.435 "auth": { 00:22:40.435 "dhgroup": "ffdhe8192", 00:22:40.435 "digest": "sha512", 00:22:40.435 "state": "completed" 00:22:40.435 }, 00:22:40.435 "cntlid": 141, 00:22:40.435 "listen_address": { 00:22:40.435 "adrfam": "IPv4", 00:22:40.435 "traddr": "10.0.0.2", 00:22:40.435 "trsvcid": "4420", 00:22:40.435 "trtype": "TCP" 00:22:40.435 }, 00:22:40.435 "peer_address": { 00:22:40.435 "adrfam": "IPv4", 00:22:40.435 "traddr": "10.0.0.1", 00:22:40.435 "trsvcid": "54530", 00:22:40.435 "trtype": "TCP" 00:22:40.435 }, 00:22:40.435 "qid": 0, 00:22:40.435 "state": "enabled", 00:22:40.435 "thread": "nvmf_tgt_poll_group_000" 00:22:40.435 } 00:22:40.435 ]' 00:22:40.435 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.436 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.694 00:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:02:NjYyMGQ5YWExNTkwZDU4YTIwOTA2MTg2YjdjNDJmMTRmYzAwM2NiNDIxYjVmYjViqsEpoA==: --dhchap-ctrl-secret DHHC-1:01:NTAxZTEyYjQ4OGRlOWEzN2UxN2U4MDU1NWFiMjY1YzgKw4J6: 00:22:41.261 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.262 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.520 00:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.454 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.454 00:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
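On the nvme-cli side, each connect passes the secrets inline in the DHHC-1:NN:<base64>: representation used for NVMe in-band authentication; the NN field appears to encode how the stored secret was derived (00 for a plain secret; 01, 02, 03 for SHA-256/384/512-transformed forms), and the base64 payload carries the key material plus a check value. One of the connects from this run, reflowed for readability and with the secrets abbreviated here:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea \
        --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea \
        --dhchap-secret 'DHHC-1:02:NjYyMGQ5...qsEpoA==:' \
        --dhchap-ctrl-secret 'DHHC-1:01:NTAxZTEy...Kw4J6:'

Consistent with the target-side registration, the key3 connects pass only --dhchap-secret and omit --dhchap-ctrl-secret.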
00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.724 { 00:22:42.724 "auth": { 00:22:42.724 "dhgroup": "ffdhe8192", 00:22:42.724 "digest": "sha512", 00:22:42.724 "state": "completed" 00:22:42.724 }, 00:22:42.724 "cntlid": 143, 00:22:42.724 "listen_address": { 00:22:42.724 "adrfam": "IPv4", 00:22:42.724 "traddr": "10.0.0.2", 00:22:42.724 "trsvcid": "4420", 00:22:42.724 "trtype": "TCP" 00:22:42.724 }, 00:22:42.724 "peer_address": { 00:22:42.724 "adrfam": "IPv4", 00:22:42.724 "traddr": "10.0.0.1", 00:22:42.724 "trsvcid": "54546", 00:22:42.724 "trtype": "TCP" 00:22:42.724 }, 00:22:42.724 "qid": 0, 00:22:42.724 "state": "enabled", 00:22:42.724 "thread": "nvmf_tgt_poll_group_000" 00:22:42.724 } 00:22:42.724 ]' 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.724 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.725 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.725 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.995 00:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:43.930 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:43.931 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:43.931 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.189 00:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.756 00:22:44.756 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:44.756 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:44.756 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.014 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.015 { 00:22:45.015 "auth": { 00:22:45.015 "dhgroup": "ffdhe8192", 00:22:45.015 "digest": "sha512", 00:22:45.015 "state": "completed" 00:22:45.015 }, 00:22:45.015 "cntlid": 145, 00:22:45.015 "listen_address": { 00:22:45.015 "adrfam": "IPv4", 00:22:45.015 "traddr": "10.0.0.2", 00:22:45.015 "trsvcid": "4420", 00:22:45.015 "trtype": "TCP" 00:22:45.015 }, 00:22:45.015 "peer_address": { 00:22:45.015 "adrfam": "IPv4", 00:22:45.015 "traddr": "10.0.0.1", 00:22:45.015 "trsvcid": "54584", 00:22:45.015 "trtype": "TCP" 00:22:45.015 }, 00:22:45.015 "qid": 0, 00:22:45.015 "state": "enabled", 00:22:45.015 "thread": "nvmf_tgt_poll_group_000" 00:22:45.015 } 
00:22:45.015 ]' 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.015 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.273 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.273 00:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.273 00:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.273 00:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.273 00:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.531 00:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:00:ODQyMjgzYzgzNjI4MzFkNzI0OWYwNzJhNDc1NzdmYjgyN2U3MjA1MGFhZWNlZWUy7h0Wqg==: --dhchap-ctrl-secret DHHC-1:03:YjIzM2Y4MWFmOGM5YjllMzU3MjY1YWIxNGY1NzI1ZTA1ZGVkMDAwZmEzOTA1M2Q2MDNlYWVmMjlhNDFkZTYyZbOsHys=: 00:22:46.098 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.098 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:46.098 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.098 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.357 00:41:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.357 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.990 2024/07/12 00:41:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:46.990 request: 00:22:46.990 { 00:22:46.990 "method": "bdev_nvme_attach_controller", 00:22:46.990 "params": { 00:22:46.990 "name": "nvme0", 00:22:46.990 "trtype": "tcp", 00:22:46.990 "traddr": "10.0.0.2", 00:22:46.990 "adrfam": "ipv4", 00:22:46.990 "trsvcid": "4420", 00:22:46.990 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:46.990 "prchk_reftag": false, 00:22:46.990 "prchk_guard": false, 00:22:46.990 "hdgst": false, 00:22:46.990 "ddgst": false, 00:22:46.990 "dhchap_key": "key2" 00:22:46.990 } 00:22:46.990 } 00:22:46.990 Got JSON-RPC error response 00:22:46.990 GoRPCClient: error on JSON-RPC call 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:46.990 00:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:47.559 2024/07/12 00:41:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:47.559 request: 00:22:47.559 { 00:22:47.559 "method": "bdev_nvme_attach_controller", 00:22:47.559 "params": { 00:22:47.559 "name": "nvme0", 00:22:47.559 "trtype": "tcp", 00:22:47.559 "traddr": "10.0.0.2", 00:22:47.559 "adrfam": "ipv4", 00:22:47.559 "trsvcid": "4420", 00:22:47.559 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:47.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:47.559 "prchk_reftag": false, 00:22:47.559 "prchk_guard": false, 00:22:47.559 "hdgst": false, 00:22:47.559 "ddgst": false, 00:22:47.559 "dhchap_key": "key1", 00:22:47.559 "dhchap_ctrlr_key": "ckey2" 00:22:47.559 } 00:22:47.559 } 00:22:47.559 Got JSON-RPC error response 00:22:47.559 GoRPCClient: error on JSON-RPC call 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key1 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:47.559 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.560 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.560 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.125 2024/07/12 00:41:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:48.125 request: 00:22:48.125 { 00:22:48.125 "method": "bdev_nvme_attach_controller", 00:22:48.126 "params": { 00:22:48.126 "name": "nvme0", 00:22:48.126 "trtype": "tcp", 00:22:48.126 "traddr": "10.0.0.2", 00:22:48.126 "adrfam": "ipv4", 00:22:48.126 "trsvcid": "4420", 00:22:48.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:22:48.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:48.126 "prchk_reftag": false, 00:22:48.126 "prchk_guard": false, 00:22:48.126 "hdgst": false, 00:22:48.126 "ddgst": false, 00:22:48.126 "dhchap_key": "key1", 00:22:48.126 "dhchap_ctrlr_key": "ckey1" 00:22:48.126 } 00:22:48.126 } 00:22:48.126 Got JSON-RPC error response 00:22:48.126 GoRPCClient: error on JSON-RPC call 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 85363 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 85363 ']' 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 85363 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85363 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:48.126 killing process with pid 85363 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85363' 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 85363 00:22:48.126 00:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 85363 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=90254 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 90254 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 90254 ']' 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:49.500 00:41:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.500 00:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 90254 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 90254 ']' 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
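Note on the restart traced above: --wait-for-rpc brings nvmf_tgt up with its framework paused until an RPC releases it, which is what lets the script push DH-HMAC-CHAP configuration in before the subsystems initialize. A minimal sketch of that handshake, assuming the rpc.py path and default /var/tmp/spdk.sock seen in this log (the polling loop is illustrative, not the harness's actual waitforlisten):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # Poll until the app's RPC server accepts connections (waitforlisten's job).
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # Release the paused framework so subsystem initialization proceeds.
    $RPC framework_start_init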
00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.435 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.693 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.693 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:50.693 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:50.693 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.693 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.951 00:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.952 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:50.952 00:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:51.884 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.884 00:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.142 { 00:22:52.142 "auth": { 00:22:52.142 "dhgroup": 
"ffdhe8192", 00:22:52.142 "digest": "sha512", 00:22:52.142 "state": "completed" 00:22:52.142 }, 00:22:52.142 "cntlid": 1, 00:22:52.142 "listen_address": { 00:22:52.142 "adrfam": "IPv4", 00:22:52.142 "traddr": "10.0.0.2", 00:22:52.142 "trsvcid": "4420", 00:22:52.142 "trtype": "TCP" 00:22:52.142 }, 00:22:52.142 "peer_address": { 00:22:52.142 "adrfam": "IPv4", 00:22:52.142 "traddr": "10.0.0.1", 00:22:52.142 "trsvcid": "40420", 00:22:52.142 "trtype": "TCP" 00:22:52.142 }, 00:22:52.142 "qid": 0, 00:22:52.142 "state": "enabled", 00:22:52.142 "thread": "nvmf_tgt_poll_group_000" 00:22:52.142 } 00:22:52.142 ]' 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.142 00:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.399 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid 637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-secret DHHC-1:03:Nzk0M2Y2YWYxMjYxN2U4YzhkMDZmOGE3ZDBiNzdkMTAxMzZlOGU4NzkxMDI1NjE5MTE4ODMyMWU1MDkxYjQ5NvrqLR0=: 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --dhchap-key key3 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:53.334 00:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.334 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.592 2024/07/12 00:41:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:53.592 request: 00:22:53.592 { 00:22:53.592 "method": "bdev_nvme_attach_controller", 00:22:53.592 "params": { 00:22:53.592 "name": "nvme0", 00:22:53.592 "trtype": "tcp", 00:22:53.592 "traddr": "10.0.0.2", 00:22:53.592 "adrfam": "ipv4", 00:22:53.592 "trsvcid": "4420", 00:22:53.592 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:53.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:53.592 "prchk_reftag": false, 00:22:53.592 "prchk_guard": false, 00:22:53.592 "hdgst": false, 00:22:53.592 "ddgst": false, 00:22:53.592 "dhchap_key": "key3" 00:22:53.592 } 00:22:53.592 } 00:22:53.592 Got JSON-RPC error response 00:22:53.592 GoRPCClient: error on JSON-RPC call 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:22:53.592 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.850 00:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.108 2024/07/12 00:41:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:54.108 request: 00:22:54.108 { 00:22:54.108 "method": "bdev_nvme_attach_controller", 00:22:54.108 "params": { 00:22:54.108 "name": "nvme0", 00:22:54.108 "trtype": "tcp", 00:22:54.108 "traddr": "10.0.0.2", 00:22:54.108 "adrfam": "ipv4", 00:22:54.108 "trsvcid": "4420", 00:22:54.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:54.108 "prchk_reftag": false, 00:22:54.108 "prchk_guard": false, 00:22:54.108 "hdgst": false, 00:22:54.108 "ddgst": false, 00:22:54.108 "dhchap_key": "key3" 00:22:54.108 } 00:22:54.108 } 00:22:54.108 Got JSON-RPC error response 00:22:54.108 GoRPCClient: error on JSON-RPC call 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
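Note: the Code=-5 Input/output errors above are the expected outcome, not incidental failures. Once bdev_nvme_set_options narrows the host to sha256 digests (and, on the retry, to ffdhe2048 groups) while the subsystem was configured for sha512/ffdhe8192, the DH-HMAC-CHAP negotiation cannot complete, and the NOT wrapper asserts that the attach fails. The helper traced repeatedly here as es=0 / (( es > 128 )) / (( !es == 0 )) behaves roughly like this simplified sketch (the valid_exec_arg details in the trace are elided):

    NOT() {
        local es=0
        "$@" || es=$?                   # run the wrapped command, capture its status
        (( es > 128 )) && return "$es"  # deaths by signal propagate unchanged
        (( !es == 0 ))                  # succeed only when the command failed
    }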
00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.365 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.622 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.879 2024/07/12 00:41:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:22:54.879 request: 00:22:54.879 { 00:22:54.879 "method": "bdev_nvme_attach_controller", 00:22:54.879 "params": { 00:22:54.879 "name": "nvme0", 00:22:54.879 "trtype": "tcp", 00:22:54.879 "traddr": "10.0.0.2", 00:22:54.879 "adrfam": "ipv4", 00:22:54.879 "trsvcid": "4420", 00:22:54.879 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea", 00:22:54.879 "prchk_reftag": false, 00:22:54.879 "prchk_guard": false, 00:22:54.879 "hdgst": false, 00:22:54.879 "ddgst": false, 00:22:54.879 "dhchap_key": "key0", 00:22:54.879 "dhchap_ctrlr_key": "key1" 00:22:54.879 } 00:22:54.879 } 00:22:54.879 Got JSON-RPC error response 00:22:54.879 GoRPCClient: error on JSON-RPC call 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:54.879 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:55.136 00:22:55.136 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:55.136 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:55.136 00:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.464 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.464 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.464 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 85407 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 85407 ']' 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 85407 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85407 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:55.722 killing process with pid 85407 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85407' 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 85407 00:22:55.722 00:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 85407 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.271 rmmod nvme_tcp 00:22:58.271 rmmod nvme_fabrics 00:22:58.271 rmmod nvme_keyring 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:58.271 00:42:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 90254 ']' 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 90254 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 90254 ']' 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 90254 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90254 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.271 killing process with pid 90254 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90254' 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 90254 00:22:58.271 00:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 90254 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.9Pm /tmp/spdk.key-sha256.AKQ /tmp/spdk.key-sha384.2Xv /tmp/spdk.key-sha512.CYb /tmp/spdk.key-sha512.SAD /tmp/spdk.key-sha384.2xX /tmp/spdk.key-sha256.9OO '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:22:59.646 00:22:59.646 real 3m2.113s 00:22:59.646 user 7m16.361s 00:22:59.646 sys 0m23.211s 00:22:59.646 00:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.646 ************************************ 00:22:59.647 END TEST nvmf_auth_target 00:22:59.647 00:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.647 ************************************ 00:22:59.647 00:42:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.647 00:42:04 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:59.647 00:42:04 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:59.647 00:42:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:59.647 00:42:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.647 00:42:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.647 ************************************ 00:22:59.647 START TEST nvmf_bdevio_no_huge 00:22:59.647 ************************************ 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:59.647 * Looking for test storage... 
00:22:59.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.647 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:59.905 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.905 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:59.905 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.905 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.905 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.906 00:42:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:59.906 Cannot find device "nvmf_tgt_br" 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:59.906 Cannot find device "nvmf_tgt_br2" 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:59.906 Cannot find device "nvmf_tgt_br" 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:59.906 Cannot find device "nvmf_tgt_br2" 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
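Note: the "Cannot find device" messages above (and the namespace errors just below) are the expected no-op case; nvmf_veth_init deletes leftovers from earlier runs before rebuilding. The topology it constructs in the following lines condenses to an initiator veth and a namespaced target veth joined by a bridge; a sketch with the names and addresses used in this run (link-up, the second target interface, and the iptables steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge both sides
    ip link set nvmf_tgt_br master nvmf_br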
00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:59.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:59.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:59.906 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:00.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:00.164 00:23:00.164 --- 10.0.0.2 ping statistics --- 00:23:00.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.164 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:00.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:00.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:00.164 00:23:00.164 --- 10.0.0.3 ping statistics --- 00:23:00.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.164 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:00.164 00:23:00.164 --- 10.0.0.1 ping statistics --- 00:23:00.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.164 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=90700 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 90700 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 90700 ']' 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
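Note: the launch line traced above is what gives this suite its name: --no-huge -s 1024 has the DPDK environment layer back SPDK with 1024 MB of ordinary anonymous memory instead of reserved hugepages. An illustrative spot-check from outside the harness (not part of bdevio.sh):

    # In --no-huge mode the target needs no hugepage reservation at all.
    grep -E '^HugePages_(Total|Free)' /proc/meminfo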
00:23:00.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.164 00:42:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.422 [2024-07-12 00:42:05.101017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:00.422 [2024-07-12 00:42:05.101270] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:00.422 [2024-07-12 00:42:05.322810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.988 [2024-07-12 00:42:05.660065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.988 [2024-07-12 00:42:05.660151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.988 [2024-07-12 00:42:05.660215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.988 [2024-07-12 00:42:05.660237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.988 [2024-07-12 00:42:05.660258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.988 [2024-07-12 00:42:05.660500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.988 [2024-07-12 00:42:05.660632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:00.988 [2024-07-12 00:42:05.661311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.988 [2024-07-12 00:42:05.661328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.246 [2024-07-12 00:42:06.146029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.246 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.504 Malloc0 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
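Note: with the no-huge target up, bdevio.sh provisions it over RPC: the TCP transport (8192-byte I/O unit) and a 64 MiB, 512-byte-block malloc bdev above, then, traced next, a subsystem exposing that bdev on 10.0.0.2:4420. Collected into one standalone sequence with the same values (rpc.py defaults to /var/tmp/spdk.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420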
00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.504 [2024-07-12 00:42:06.243903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.504 { 00:23:01.504 "params": { 00:23:01.504 "name": "Nvme$subsystem", 00:23:01.504 "trtype": "$TEST_TRANSPORT", 00:23:01.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.504 "adrfam": "ipv4", 00:23:01.504 "trsvcid": "$NVMF_PORT", 00:23:01.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.504 "hdgst": ${hdgst:-false}, 00:23:01.504 "ddgst": ${ddgst:-false} 00:23:01.504 }, 00:23:01.504 "method": "bdev_nvme_attach_controller" 00:23:01.504 } 00:23:01.504 EOF 00:23:01.504 )") 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:01.504 00:42:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.504 "params": { 00:23:01.504 "name": "Nvme1", 00:23:01.504 "trtype": "tcp", 00:23:01.504 "traddr": "10.0.0.2", 00:23:01.504 "adrfam": "ipv4", 00:23:01.504 "trsvcid": "4420", 00:23:01.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.504 "hdgst": false, 00:23:01.504 "ddgst": false 00:23:01.504 }, 00:23:01.504 "method": "bdev_nvme_attach_controller" 00:23:01.504 }' 00:23:01.504 [2024-07-12 00:42:06.345925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:01.504 [2024-07-12 00:42:06.346652] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid90754 ] 00:23:01.761 [2024-07-12 00:42:06.534119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.018 [2024-07-12 00:42:06.841368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.018 [2024-07-12 00:42:06.841521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.018 [2024-07-12 00:42:06.841726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.582 I/O targets: 00:23:02.582 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:02.582 00:23:02.582 00:23:02.582 CUnit - A unit testing framework for C - Version 2.1-3 00:23:02.582 http://cunit.sourceforge.net/ 00:23:02.582 00:23:02.582 00:23:02.582 Suite: bdevio tests on: Nvme1n1 00:23:02.582 Test: blockdev write read block ...passed 00:23:02.582 Test: blockdev write zeroes read block ...passed 00:23:02.582 Test: blockdev write zeroes read no split ...passed 00:23:02.582 Test: blockdev write zeroes read split ...passed 00:23:02.582 Test: blockdev write zeroes read split partial ...passed 00:23:02.583 Test: blockdev reset ...[2024-07-12 00:42:07.407520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.583 [2024-07-12 00:42:07.408113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:23:02.583 [2024-07-12 00:42:07.421688] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:02.583 passed 00:23:02.583 Test: blockdev write read 8 blocks ...passed 00:23:02.583 Test: blockdev write read size > 128k ...passed 00:23:02.583 Test: blockdev write read invalid size ...passed 00:23:02.583 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.583 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.583 Test: blockdev write read max offset ...passed 00:23:02.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.840 Test: blockdev writev readv 8 blocks ...passed 00:23:02.840 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.840 Test: blockdev writev readv block ...passed 00:23:02.840 Test: blockdev writev readv size > 128k ...passed 00:23:02.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.840 Test: blockdev comparev and writev ...[2024-07-12 00:42:07.599054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.599138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.599169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.599187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.599783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.599824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.599852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.599868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.600482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.600540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.600577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.600594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.601054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.601094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:02.840 [2024-07-12 00:42:07.601122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:02.840 [2024-07-12 00:42:07.601137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:02.840 passed 00:23:02.840 Test: blockdev nvme passthru rw ...passed 00:23:02.840 Test: blockdev nvme passthru vendor specific ...[2024-07-12 00:42:07.683926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.841 [2024-07-12 00:42:07.683991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:02.841 [2024-07-12 00:42:07.684217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.841 [2024-07-12 00:42:07.684243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:02.841 [2024-07-12 00:42:07.684444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.841 [2024-07-12 00:42:07.684474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:02.841 [2024-07-12 00:42:07.684676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:02.841 [2024-07-12 00:42:07.684712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:02.841 passed 00:23:02.841 Test: blockdev nvme admin passthru ...passed 00:23:02.841 Test: blockdev copy ...passed 00:23:02.841 00:23:02.841 Run Summary: Type Total Ran Passed Failed Inactive 00:23:02.841 suites 1 1 n/a 0 0 00:23:02.841 tests 23 23 23 0 0 00:23:02.841 asserts 152 152 152 0 n/a 00:23:02.841 00:23:02.841 Elapsed time = 1.011 seconds 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.775 rmmod nvme_tcp 00:23:03.775 rmmod nvme_fabrics 00:23:03.775 rmmod nvme_keyring 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 90700 ']' 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 90700 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 90700 ']' 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 90700 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90700 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:03.775 killing process with pid 90700 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90700' 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 90700 00:23:03.775 00:42:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 90700 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.710 00:23:04.710 real 0m5.161s 00:23:04.710 user 0m18.664s 00:23:04.710 sys 0m1.676s 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.710 00:42:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:04.710 ************************************ 00:23:04.710 END TEST nvmf_bdevio_no_huge 00:23:04.710 ************************************ 00:23:04.969 00:42:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:04.969 00:42:09 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:04.969 00:42:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:04.969 00:42:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.969 00:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.969 ************************************ 00:23:04.969 START TEST nvmf_tls 00:23:04.969 ************************************ 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:04.969 * Looking for test storage... 
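Before the TLS suite spins up, the teardown that just ran is worth unpacking: killprocess probes the pid with kill -0, checks the process name (reactor_3 here) so it never signals an unrelated reuse of the pid, then kills and waits so the port and shared memory are actually free before the next test. A condensed sketch of that flow:

killprocess() {
  local pid=$1 name=
  [ -n "$pid" ] || return 1                    # no pid recorded, nothing to do
  kill -0 "$pid" 2> /dev/null || return 0      # process already gone
  if [ "$(uname)" = Linux ]; then
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_3 in the run above
  fi
  if [ "$name" = sudo ]; then
    kill -9 "$pid"    # sudo wrapper: force-kill (simplification of the in-tree branch)
  else
    echo "killing process with pid $pid"
    kill "$pid"
  fi
  wait "$pid" 2> /dev/null || true             # reap it; ignore "not a child of this shell"
}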
00:23:04.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.969 Cannot find device "nvmf_tgt_br" 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.969 Cannot find device "nvmf_tgt_br2" 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.969 Cannot find device "nvmf_tgt_br" 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.969 Cannot find device "nvmf_tgt_br2" 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:23:04.969 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.228 00:42:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:23:05.228 00:23:05.228 --- 10.0.0.2 ping statistics --- 00:23:05.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.228 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:05.228 00:23:05.228 --- 10.0.0.3 ping statistics --- 00:23:05.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.228 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:05.228 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:05.228 00:23:05.228 --- 10.0.0.1 ping statistics --- 00:23:05.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.228 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=90977 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 90977 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 90977 ']' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.229 00:42:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.487 [2024-07-12 00:42:10.247507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:05.487 [2024-07-12 00:42:10.247712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.749 [2024-07-12 00:42:10.425270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.012 [2024-07-12 00:42:10.706368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.012 [2024-07-12 00:42:10.706491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:06.012 [2024-07-12 00:42:10.706508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.012 [2024-07-12 00:42:10.706523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.012 [2024-07-12 00:42:10.706549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.012 [2024-07-12 00:42:10.706606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:06.270 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:06.527 true 00:23:06.785 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:06.785 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.043 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:07.043 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:07.043 00:42:11 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:07.301 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.301 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:07.558 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:07.558 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:07.558 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:07.816 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.816 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:08.073 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:08.073 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:08.073 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:08.073 00:42:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:08.330 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:08.330 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:08.330 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:08.587 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:08.587 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
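Everything from target/tls.sh@70 onward above is the same set-then-verify round-trip against the ssl sock implementation. Condensed (rpc.py path as in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc sock_set_default_impl -i ssl                    # prints "true" once ssl is the default
$rpc sock_impl_set_options -i ssl --tls-version 13
version=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
[[ $version == 13 ]] || exit 1                       # read-back must match what was set

$rpc sock_impl_set_options -i ssl --enable-ktls      # same dance for kTLS on/off
ktls=$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)
[[ $ktls == true ]] || exit 1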
00:23:08.844 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:08.844 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:08.844 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:09.102 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.102 00:42:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.FHpIGdeYIr 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.jkOLbLZgFa 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FHpIGdeYIr 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.jkOLbLZgFa 00:23:09.360 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:09.925 00:42:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:10.492 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FHpIGdeYIr 
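The two interchange keys minted above (the NVMeTLSkey-1:01:...: strings) come from format_interchange_psk, which shells out to python as the nvmf/common.sh@705 trace shows. Judging by the encoded values, the key argument is taken as ASCII bytes and a little-endian CRC32 is appended before base64-encoding; treat that as inferred rather than authoritative and check nvmf/common.sh if it matters. A sketch that should reproduce the first key printed above:

format_interchange_psk() {
  local key=$1 hash=${2:-1}
  python - "$key" "$hash" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode("ascii")            # configured key, used verbatim
crc = struct.pack("<I", zlib.crc32(key))     # 4-byte little-endian integrity trailer
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64), end="")
PYEOF
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1
#   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: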
00:23:10.492 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FHpIGdeYIr 00:23:10.492 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.492 [2024-07-12 00:42:15.408850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.750 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.009 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.267 [2024-07-12 00:42:15.957001] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.267 [2024-07-12 00:42:15.957282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.267 00:42:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.525 malloc0 00:23:11.525 00:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.784 00:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FHpIGdeYIr 00:23:12.042 [2024-07-12 00:42:16.804725] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:12.042 00:42:16 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FHpIGdeYIr 00:23:24.252 Initializing NVMe Controllers 00:23:24.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.252 Initialization complete. Launching workers. 
00:23:24.252 ======================================================== 00:23:24.252 Latency(us) 00:23:24.252 Device Information : IOPS MiB/s Average min max 00:23:24.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6290.60 24.57 10177.69 2461.47 11934.94 00:23:24.252 ======================================================== 00:23:24.252 Total : 6290.60 24.57 10177.69 2461.47 11934.94 00:23:24.252 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FHpIGdeYIr 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FHpIGdeYIr' 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91335 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91335 /var/tmp/bdevperf.sock 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91335 ']' 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.252 00:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.252 [2024-07-12 00:42:27.289506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:24.252 [2024-07-12 00:42:27.290110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91335 ] 00:23:24.252 [2024-07-12 00:42:27.466158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.252 [2024-07-12 00:42:27.774908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.252 00:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.252 00:42:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.252 00:42:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FHpIGdeYIr 00:23:24.253 [2024-07-12 00:42:28.535104] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.253 [2024-07-12 00:42:28.535299] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.253 TLSTESTn1 00:23:24.253 00:42:28 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.253 Running I/O for 10 seconds... 00:23:34.301 00:23:34.301 Latency(us) 00:23:34.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.301 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.301 Verification LBA range: start 0x0 length 0x2000 00:23:34.301 TLSTESTn1 : 10.05 2673.45 10.44 0.00 0.00 47752.88 9234.62 29550.78 00:23:34.301 =================================================================================================================== 00:23:34.301 Total : 2673.45 10.44 0.00 0.00 47752.88 9234.62 29550.78 00:23:34.301 0 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 91335 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91335 ']' 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91335 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91335 00:23:34.301 killing process with pid 91335 00:23:34.301 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.301 00:23:34.301 Latency(us) 00:23:34.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.301 =================================================================================================================== 00:23:34.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
91335' 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91335 00:23:34.301 00:42:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91335 00:23:34.301 [2024-07-12 00:42:38.861903] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jkOLbLZgFa 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jkOLbLZgFa 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jkOLbLZgFa 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jkOLbLZgFa' 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91489 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91489 /var/tmp/bdevperf.sock 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91489 ']' 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.237 00:42:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.496 [2024-07-12 00:42:40.273002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:35.496 [2024-07-12 00:42:40.273207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91489 ] 00:23:35.754 [2024-07-12 00:42:40.444867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.012 [2024-07-12 00:42:40.704564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.579 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.579 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.579 00:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jkOLbLZgFa 00:23:36.837 [2024-07-12 00:42:41.520035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.837 [2024-07-12 00:42:41.520239] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:36.837 [2024-07-12 00:42:41.530223] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:36.837 [2024-07-12 00:42:41.531053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:23:36.837 [2024-07-12 00:42:41.532021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:36.837 [2024-07-12 00:42:41.533020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.837 [2024-07-12 00:42:41.533058] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:36.837 [2024-07-12 00:42:41.533084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:36.837 2024/07/12 00:42:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.jkOLbLZgFa subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:36.837 request: 00:23:36.837 { 00:23:36.838 "method": "bdev_nvme_attach_controller", 00:23:36.838 "params": { 00:23:36.838 "name": "TLSTEST", 00:23:36.838 "trtype": "tcp", 00:23:36.838 "traddr": "10.0.0.2", 00:23:36.838 "adrfam": "ipv4", 00:23:36.838 "trsvcid": "4420", 00:23:36.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.838 "prchk_reftag": false, 00:23:36.838 "prchk_guard": false, 00:23:36.838 "hdgst": false, 00:23:36.838 "ddgst": false, 00:23:36.838 "psk": "/tmp/tmp.jkOLbLZgFa" 00:23:36.838 } 00:23:36.838 } 00:23:36.838 Got JSON-RPC error response 00:23:36.838 GoRPCClient: error on JSON-RPC call 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 91489 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91489 ']' 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91489 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91489 00:23:36.838 killing process with pid 91489 00:23:36.838 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.838 00:23:36.838 Latency(us) 00:23:36.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.838 =================================================================================================================== 00:23:36.838 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91489' 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91489 00:23:36.838 00:42:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91489 00:23:36.838 [2024-07-12 00:42:41.584936] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FHpIGdeYIr 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FHpIGdeYIr 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.830 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FHpIGdeYIr 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FHpIGdeYIr' 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91550 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91550 /var/tmp/bdevperf.sock 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91550 ']' 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.831 00:42:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.089 [2024-07-12 00:42:42.871833] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:38.089 [2024-07-12 00:42:42.872031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91550 ] 00:23:38.348 [2024-07-12 00:42:43.048569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.348 [2024-07-12 00:42:43.269382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.913 00:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.913 00:42:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:38.913 00:42:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FHpIGdeYIr 00:23:39.172 [2024-07-12 00:42:44.036266] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.172 [2024-07-12 00:42:44.036918] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:39.172 [2024-07-12 00:42:44.047859] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:39.172 [2024-07-12 00:42:44.048384] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:39.172 [2024-07-12 00:42:44.048600] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:39.172 [2024-07-12 00:42:44.048896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:23:39.172 [2024-07-12 00:42:44.049863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:39.172 [2024-07-12 00:42:44.050853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:39.172 [2024-07-12 00:42:44.050909] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:39.172 [2024-07-12 00:42:44.050928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
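Same shape of failure as test 146, but reported from the target side this time: only host1 is registered against cnode1, so the server cannot find a PSK for identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and tears the session down. Purely as an illustration (not part of this test), the host2 attach would only stand a chance if its key had been registered first, e.g.:

# hypothetical: give host2 a PSK on the subsystem before connecting as host2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FHpIGdeYIr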
00:23:39.172 2024/07/12 00:42:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.FHpIGdeYIr subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:39.172 request: 00:23:39.172 { 00:23:39.172 "method": "bdev_nvme_attach_controller", 00:23:39.172 "params": { 00:23:39.172 "name": "TLSTEST", 00:23:39.172 "trtype": "tcp", 00:23:39.172 "traddr": "10.0.0.2", 00:23:39.172 "adrfam": "ipv4", 00:23:39.172 "trsvcid": "4420", 00:23:39.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.172 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:39.172 "prchk_reftag": false, 00:23:39.172 "prchk_guard": false, 00:23:39.172 "hdgst": false, 00:23:39.172 "ddgst": false, 00:23:39.172 "psk": "/tmp/tmp.FHpIGdeYIr" 00:23:39.172 } 00:23:39.172 } 00:23:39.172 Got JSON-RPC error response 00:23:39.172 GoRPCClient: error on JSON-RPC call 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 91550 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91550 ']' 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91550 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91550 00:23:39.172 killing process with pid 91550 00:23:39.172 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.172 00:23:39.172 Latency(us) 00:23:39.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.172 =================================================================================================================== 00:23:39.172 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91550' 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91550 00:23:39.172 [2024-07-12 00:42:44.099092] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:39.172 00:42:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91550 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FHpIGdeYIr 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FHpIGdeYIr 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:40.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FHpIGdeYIr 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FHpIGdeYIr' 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91598 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91598 /var/tmp/bdevperf.sock 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91598 ']' 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.549 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.550 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.550 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.550 00:42:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.550 [2024-07-12 00:42:45.362934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:40.550 [2024-07-12 00:42:45.363154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91598 ] 00:23:40.809 [2024-07-12 00:42:45.537277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.068 [2024-07-12 00:42:45.793652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.636 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.636 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.636 00:42:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FHpIGdeYIr 00:23:41.636 [2024-07-12 00:42:46.545783] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.636 [2024-07-12 00:42:46.545946] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:41.636 [2024-07-12 00:42:46.558417] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:41.636 [2024-07-12 00:42:46.558470] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:41.636 [2024-07-12 00:42:46.558537] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:41.636 [2024-07-12 00:42:46.558942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:23:41.636 [2024-07-12 00:42:46.559914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:23:41.636 [2024-07-12 00:42:46.560905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:41.636 [2024-07-12 00:42:46.560945] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:41.637 [2024-07-12 00:42:46.560967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
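This failure and the host2 case before it break at the same point: the target derives a TLS PSK identity from the connecting (hostnqn, subnqn) pair and finds nothing registered under it, since only host1 against cnode1 was added with nvmf_subsystem_add_host. The identity string appears verbatim in the lookup errors above; a sketch of how it is assembled (the helper name is illustrative, not SPDK API, and the 0R01 token is copied from the log):

    psk_identity() {
        local hostnqn=$1 subnqn=$2
        printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    }
    psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
    # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
    # Nothing is registered under this identity, so the handshake is dropped
    # and the initiator surfaces errno 107 (ENOTCONN), as logged.

The JSON-RPC dump that follows records the rejected attach for this pair.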
00:23:41.637 2024/07/12 00:42:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.FHpIGdeYIr subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:41.637 request: 00:23:41.637 { 00:23:41.637 "method": "bdev_nvme_attach_controller", 00:23:41.637 "params": { 00:23:41.637 "name": "TLSTEST", 00:23:41.637 "trtype": "tcp", 00:23:41.637 "traddr": "10.0.0.2", 00:23:41.637 "adrfam": "ipv4", 00:23:41.637 "trsvcid": "4420", 00:23:41.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:41.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.637 "prchk_reftag": false, 00:23:41.637 "prchk_guard": false, 00:23:41.637 "hdgst": false, 00:23:41.637 "ddgst": false, 00:23:41.637 "psk": "/tmp/tmp.FHpIGdeYIr" 00:23:41.637 } 00:23:41.637 } 00:23:41.637 Got JSON-RPC error response 00:23:41.637 GoRPCClient: error on JSON-RPC call 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 91598 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91598 ']' 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91598 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91598 00:23:41.895 killing process with pid 91598 00:23:41.895 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.895 00:23:41.895 Latency(us) 00:23:41.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.895 =================================================================================================================== 00:23:41.895 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:41.895 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:41.896 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91598' 00:23:41.896 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91598 00:23:41.896 00:42:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91598 00:23:41.896 [2024-07-12 00:42:46.605995] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:43.279 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91657 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91657 /var/tmp/bdevperf.sock 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91657 ']' 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.280 00:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.280 [2024-07-12 00:42:47.955743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
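The third negative case passes an empty key, so run_bdevperf issues the attach with no --psk flag at all; the command below is condensed from the trace that follows (rpc.py path shortened). Presumably because the listener was created with -k, the plain TCP connection is torn down during setup, which is why the errors show only ENOTCONN and no PSK lookup lines:

    # Attach without a key against the TLS-enabled listener; must fail.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # no --psk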
00:23:43.280 [2024-07-12 00:42:47.956685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91657 ] 00:23:43.280 [2024-07-12 00:42:48.129862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.538 [2024-07-12 00:42:48.387243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.106 00:42:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.106 00:42:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:44.106 00:42:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:44.364 [2024-07-12 00:42:49.159068] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:44.364 [2024-07-12 00:42:49.161033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:23:44.364 [2024-07-12 00:42:49.162028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.364 [2024-07-12 00:42:49.162061] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:44.364 [2024-07-12 00:42:49.162084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.364 2024/07/12 00:42:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:44.364 request: 00:23:44.364 { 00:23:44.364 "method": "bdev_nvme_attach_controller", 00:23:44.364 "params": { 00:23:44.364 "name": "TLSTEST", 00:23:44.364 "trtype": "tcp", 00:23:44.364 "traddr": "10.0.0.2", 00:23:44.364 "adrfam": "ipv4", 00:23:44.364 "trsvcid": "4420", 00:23:44.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.364 "prchk_reftag": false, 00:23:44.364 "prchk_guard": false, 00:23:44.364 "hdgst": false, 00:23:44.364 "ddgst": false 00:23:44.364 } 00:23:44.364 } 00:23:44.364 Got JSON-RPC error response 00:23:44.364 GoRPCClient: error on JSON-RPC call 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 91657 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91657 ']' 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91657 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91657 00:23:44.364 killing process with pid 91657 00:23:44.364 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.364 00:23:44.364 Latency(us) 00:23:44.364 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.364 =================================================================================================================== 00:23:44.364 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91657' 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91657 00:23:44.364 00:42:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91657 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 90977 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 90977 ']' 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 90977 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90977 00:23:45.740 killing process with pid 90977 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90977' 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 90977 00:23:45.740 [2024-07-12 00:42:50.463579] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.740 00:42:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 90977 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.j7IIFxUgrO 
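The key_long value above comes from format_interchange_psk, which wraps the raw key in the NVMe/TCP PSK interchange form: a NVMeTLSkey-1 prefix, a two-digit hash indicator (02 here), and a colon-terminated base64 payload covering the key plus a 4-byte CRC. A sketch of the transformation, mirroring the python call visible in the trace; the little-endian CRC-32 tail is an assumption about the helper's internals, not taken from the log:

    format_key() {
        local prefix=$1 key=$2 digest=$3
        # assumption: CRC-32 of the key bytes, appended little-endian,
        # then base64 over key+crc
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
    }
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
    # -> NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:   (the key_long printed above)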
00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.j7IIFxUgrO 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:47.116 00:42:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=91742 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 91742 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91742 ']' 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.116 00:42:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.376 [2024-07-12 00:42:52.139840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:47.376 [2024-07-12 00:42:52.140111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.634 [2024-07-12 00:42:52.319111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.893 [2024-07-12 00:42:52.594799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.893 [2024-07-12 00:42:52.594896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.893 [2024-07-12 00:42:52.594914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.893 [2024-07-12 00:42:52.594930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.893 [2024-07-12 00:42:52.594941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
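The chmod 0600 above is load-bearing: both ends refuse a PSK file that group or other can access, which the 0666 cases later in this run demonstrate from the initiator and the target side in turn. A rough bash equivalent of the guard, for illustration only (the real check lives in SPDK's C code and logs "Incorrect permissions for PSK file"):

    # Accept the key file only if no group/other permission bits are set.
    psk_perms_ok() {
        local mode
        mode=$(stat -c '%a' "$1")
        (( (8#$mode & 8#077) == 0 ))
    }
    psk_perms_ok /tmp/tmp.j7IIFxUgrO && echo "0600: ok"
    # After the later "chmod 0666" the same predicate fails.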
00:23:47.893 [2024-07-12 00:42:52.594983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.155 00:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.155 00:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:48.155 00:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.155 00:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.155 00:42:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.415 00:42:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.415 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:23:48.415 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.j7IIFxUgrO 00:23:48.415 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.674 [2024-07-12 00:42:53.392108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.674 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.931 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:49.189 [2024-07-12 00:42:53.948301] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.189 [2024-07-12 00:42:53.948600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.189 00:42:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.448 malloc0 00:23:49.448 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:23:50.015 [2024-07-12 00:42:54.886326] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j7IIFxUgrO 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.j7IIFxUgrO' 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91845 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.015 00:42:54 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91845 /var/tmp/bdevperf.sock 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91845 ']' 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.015 00:42:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.274 [2024-07-12 00:42:55.029689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:50.274 [2024-07-12 00:42:55.029931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91845 ] 00:23:50.532 [2024-07-12 00:42:55.219524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.789 [2024-07-12 00:42:55.493818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.356 00:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.356 00:42:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:51.356 00:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:23:51.356 [2024-07-12 00:42:56.252133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.356 [2024-07-12 00:42:56.252390] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:51.614 TLSTESTn1 00:23:51.614 00:42:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:51.614 Running I/O for 10 seconds... 
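Condensed from the trace above, this is the happy path that produces the 10-second run whose results follow; commands are copied from the log with rpc.py paths shortened, and addresses, NQNs, and the key file are exactly as traced:

    # Target side: TCP transport, subsystem, TLS listener (-k), namespace,
    # and a PSK registered for host1 only.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO

    # Initiator side: bdevperf starts idle (-z), the TLS controller is
    # attached over its RPC socket, then the verify workload runs.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests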
00:24:01.637 00:24:01.637 Latency(us) 00:24:01.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.637 Verification LBA range: start 0x0 length 0x2000 00:24:01.637 TLSTESTn1 : 10.02 2802.44 10.95 0.00 0.00 45576.97 9532.51 37176.79 00:24:01.637 =================================================================================================================== 00:24:01.637 Total : 2802.44 10.95 0.00 0.00 45576.97 9532.51 37176.79 00:24:01.637 0 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 91845 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91845 ']' 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91845 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91845 00:24:01.637 killing process with pid 91845 00:24:01.637 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.637 00:24:01.637 Latency(us) 00:24:01.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.637 =================================================================================================================== 00:24:01.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91845' 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91845 00:24:01.637 [2024-07-12 00:43:06.546188] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:01.637 00:43:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91845 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.j7IIFxUgrO 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j7IIFxUgrO 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j7IIFxUgrO 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.j7IIFxUgrO 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:03.008 
00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.j7IIFxUgrO' 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=91999 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 91999 /var/tmp/bdevperf.sock 00:24:03.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 91999 ']' 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.008 00:43:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.008 [2024-07-12 00:43:07.931805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:03.008 [2024-07-12 00:43:07.931987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91999 ] 00:24:03.316 [2024-07-12 00:43:08.106321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.574 [2024-07-12 00:43:08.359982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.139 00:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.139 00:43:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:04.139 00:43:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:24:04.139 [2024-07-12 00:43:09.042494] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.139 [2024-07-12 00:43:09.042579] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:04.139 [2024-07-12 00:43:09.042605] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.j7IIFxUgrO 00:24:04.139 2024/07/12 00:43:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.j7IIFxUgrO subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:24:04.139 request: 00:24:04.139 { 00:24:04.139 "method": "bdev_nvme_attach_controller", 00:24:04.139 "params": { 00:24:04.139 "name": "TLSTEST", 00:24:04.139 "trtype": "tcp", 00:24:04.139 "traddr": "10.0.0.2", 00:24:04.139 "adrfam": "ipv4", 00:24:04.139 "trsvcid": "4420", 00:24:04.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.139 "prchk_reftag": false, 00:24:04.139 "prchk_guard": false, 00:24:04.139 "hdgst": false, 00:24:04.139 "ddgst": false, 00:24:04.139 "psk": "/tmp/tmp.j7IIFxUgrO" 00:24:04.139 } 00:24:04.139 } 00:24:04.139 Got JSON-RPC error response 00:24:04.139 GoRPCClient: error on JSON-RPC call 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 91999 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91999 ']' 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91999 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.139 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91999 00:24:04.397 killing process with pid 91999 00:24:04.397 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.397 00:24:04.397 Latency(us) 00:24:04.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.397 =================================================================================================================== 00:24:04.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:04.397 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:04.397 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:04.397 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91999' 00:24:04.397 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91999 00:24:04.397 00:43:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91999 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 91742 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 91742 ']' 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 91742 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91742 00:24:05.770 killing process with pid 91742 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 91742' 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 91742 00:24:05.770 [2024-07-12 00:43:10.357175] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:05.770 00:43:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 91742 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92074 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92074 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92074 ']' 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.144 00:43:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.144 [2024-07-12 00:43:12.035165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:07.144 [2024-07-12 00:43:12.035367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.402 [2024-07-12 00:43:12.212735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.660 [2024-07-12 00:43:12.471559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.660 [2024-07-12 00:43:12.471699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.660 [2024-07-12 00:43:12.471718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.660 [2024-07-12 00:43:12.471733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.660 [2024-07-12 00:43:12.471745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
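This second target instance (pid 92074) checks that the server side enforces the same permission rule: the key file is still 0666 from the chmod above, so in the trace below the transport, subsystem, listener, and namespace steps all succeed and only nvmf_subsystem_add_host fails with "Incorrect permissions for PSK file". The harness asserts that outcome by inverting the exit status of the whole setup:

    # Must fail as a whole while 0666 is in effect; the NOT wrapper makes
    # the test step pass exactly when add_host errors out.
    NOT setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO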
00:24:07.660 [2024-07-12 00:43:12.471787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.250 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:24:08.251 00:43:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.j7IIFxUgrO 00:24:08.251 00:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.517 [2024-07-12 00:43:13.238691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.517 00:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.775 00:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.032 [2024-07-12 00:43:13.774906] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.033 [2024-07-12 00:43:13.775202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.033 00:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:09.291 malloc0 00:24:09.291 00:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:09.549 00:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:24:09.808 [2024-07-12 00:43:14.590517] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:09.808 [2024-07-12 00:43:14.590579] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:09.808 [2024-07-12 00:43:14.590620] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:09.808 2024/07/12 00:43:14 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.j7IIFxUgrO], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:24:09.808 request: 00:24:09.808 { 00:24:09.808 "method": "nvmf_subsystem_add_host", 00:24:09.808 "params": { 00:24:09.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.808 "host": "nqn.2016-06.io.spdk:host1", 00:24:09.808 "psk": "/tmp/tmp.j7IIFxUgrO" 00:24:09.808 } 00:24:09.808 } 00:24:09.808 Got JSON-RPC error response 00:24:09.808 GoRPCClient: error on JSON-RPC call 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 92074 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92074 ']' 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92074 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:09.808 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92074 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:09.809 killing process with pid 92074 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92074' 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92074 00:24:09.809 00:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92074 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.j7IIFxUgrO 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92197 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92197 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92197 ']' 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.184 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.185 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
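With both permission failures demonstrated, the run enters its final phase above: owner-only access is restored and a fresh target (pid 92197) is brought up, after which the same setup must now succeed, asserted directly without the NOT wrapper:

    chmod 0600 /tmp/tmp.j7IIFxUgrO
    setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO   # success expected this time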
00:24:11.185 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.185 00:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.443 [2024-07-12 00:43:16.189427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:11.443 [2024-07-12 00:43:16.189608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.443 [2024-07-12 00:43:16.373553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.011 [2024-07-12 00:43:16.678535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.011 [2024-07-12 00:43:16.678697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.011 [2024-07-12 00:43:16.678720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.011 [2024-07-12 00:43:16.678739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.011 [2024-07-12 00:43:16.678753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.011 [2024-07-12 00:43:16.678802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.j7IIFxUgrO 00:24:12.577 00:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:12.577 [2024-07-12 00:43:17.501423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.837 00:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:13.096 00:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:13.096 [2024-07-12 00:43:18.025687] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.096 [2024-07-12 00:43:18.025997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.355 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:13.613 malloc0 00:24:13.613 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:13.872 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:24:13.872 [2024-07-12 00:43:18.792309] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=92302 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 92302 /var/tmp/bdevperf.sock 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92302 ']' 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.155 00:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.155 [2024-07-12 00:43:18.903180] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:14.155 [2024-07-12 00:43:18.903343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92302 ] 00:24:14.155 [2024-07-12 00:43:19.070656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.414 [2024-07-12 00:43:19.344597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.982 00:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.982 00:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:14.982 00:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:24:15.242 [2024-07-12 00:43:20.007682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.242 [2024-07-12 00:43:20.007860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:15.242 TLSTESTn1 00:24:15.242 00:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:15.810 00:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:15.810 "subsystems": [ 00:24:15.810 { 00:24:15.810 "subsystem": "keyring", 00:24:15.810 "config": [] 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "subsystem": "iobuf", 00:24:15.810 "config": [ 00:24:15.810 { 00:24:15.810 "method": "iobuf_set_options", 00:24:15.810 "params": { 00:24:15.810 "large_bufsize": 
135168, 00:24:15.810 "large_pool_count": 1024, 00:24:15.810 "small_bufsize": 8192, 00:24:15.810 "small_pool_count": 8192 00:24:15.810 } 00:24:15.810 } 00:24:15.810 ] 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "subsystem": "sock", 00:24:15.810 "config": [ 00:24:15.810 { 00:24:15.810 "method": "sock_set_default_impl", 00:24:15.810 "params": { 00:24:15.810 "impl_name": "posix" 00:24:15.810 } 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "method": "sock_impl_set_options", 00:24:15.810 "params": { 00:24:15.810 "enable_ktls": false, 00:24:15.810 "enable_placement_id": 0, 00:24:15.810 "enable_quickack": false, 00:24:15.810 "enable_recv_pipe": true, 00:24:15.810 "enable_zerocopy_send_client": false, 00:24:15.810 "enable_zerocopy_send_server": true, 00:24:15.810 "impl_name": "ssl", 00:24:15.810 "recv_buf_size": 4096, 00:24:15.810 "send_buf_size": 4096, 00:24:15.810 "tls_version": 0, 00:24:15.810 "zerocopy_threshold": 0 00:24:15.810 } 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "method": "sock_impl_set_options", 00:24:15.810 "params": { 00:24:15.810 "enable_ktls": false, 00:24:15.810 "enable_placement_id": 0, 00:24:15.810 "enable_quickack": false, 00:24:15.810 "enable_recv_pipe": true, 00:24:15.810 "enable_zerocopy_send_client": false, 00:24:15.810 "enable_zerocopy_send_server": true, 00:24:15.810 "impl_name": "posix", 00:24:15.810 "recv_buf_size": 2097152, 00:24:15.810 "send_buf_size": 2097152, 00:24:15.810 "tls_version": 0, 00:24:15.810 "zerocopy_threshold": 0 00:24:15.810 } 00:24:15.810 } 00:24:15.810 ] 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "subsystem": "vmd", 00:24:15.810 "config": [] 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "subsystem": "accel", 00:24:15.810 "config": [ 00:24:15.810 { 00:24:15.810 "method": "accel_set_options", 00:24:15.810 "params": { 00:24:15.810 "buf_count": 2048, 00:24:15.810 "large_cache_size": 16, 00:24:15.810 "sequence_count": 2048, 00:24:15.810 "small_cache_size": 128, 00:24:15.810 "task_count": 2048 00:24:15.810 } 00:24:15.810 } 00:24:15.810 ] 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "subsystem": "bdev", 00:24:15.810 "config": [ 00:24:15.810 { 00:24:15.810 "method": "bdev_set_options", 00:24:15.810 "params": { 00:24:15.810 "bdev_auto_examine": true, 00:24:15.810 "bdev_io_cache_size": 256, 00:24:15.810 "bdev_io_pool_size": 65535, 00:24:15.810 "iobuf_large_cache_size": 16, 00:24:15.810 "iobuf_small_cache_size": 128 00:24:15.810 } 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "method": "bdev_raid_set_options", 00:24:15.810 "params": { 00:24:15.810 "process_window_size_kb": 1024 00:24:15.810 } 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "method": "bdev_iscsi_set_options", 00:24:15.810 "params": { 00:24:15.810 "timeout_sec": 30 00:24:15.810 } 00:24:15.810 }, 00:24:15.810 { 00:24:15.810 "method": "bdev_nvme_set_options", 00:24:15.810 "params": { 00:24:15.810 "action_on_timeout": "none", 00:24:15.810 "allow_accel_sequence": false, 00:24:15.810 "arbitration_burst": 0, 00:24:15.810 "bdev_retry_count": 3, 00:24:15.810 "ctrlr_loss_timeout_sec": 0, 00:24:15.810 "delay_cmd_submit": true, 00:24:15.810 "dhchap_dhgroups": [ 00:24:15.810 "null", 00:24:15.810 "ffdhe2048", 00:24:15.810 "ffdhe3072", 00:24:15.810 "ffdhe4096", 00:24:15.810 "ffdhe6144", 00:24:15.810 "ffdhe8192" 00:24:15.810 ], 00:24:15.810 "dhchap_digests": [ 00:24:15.811 "sha256", 00:24:15.811 "sha384", 00:24:15.811 "sha512" 00:24:15.811 ], 00:24:15.811 "disable_auto_failback": false, 00:24:15.811 "fast_io_fail_timeout_sec": 0, 00:24:15.811 "generate_uuids": false, 00:24:15.811 "high_priority_weight": 0, 
00:24:15.811 "io_path_stat": false, 00:24:15.811 "io_queue_requests": 0, 00:24:15.811 "keep_alive_timeout_ms": 10000, 00:24:15.811 "low_priority_weight": 0, 00:24:15.811 "medium_priority_weight": 0, 00:24:15.811 "nvme_adminq_poll_period_us": 10000, 00:24:15.811 "nvme_error_stat": false, 00:24:15.811 "nvme_ioq_poll_period_us": 0, 00:24:15.811 "rdma_cm_event_timeout_ms": 0, 00:24:15.811 "rdma_max_cq_size": 0, 00:24:15.811 "rdma_srq_size": 0, 00:24:15.811 "reconnect_delay_sec": 0, 00:24:15.811 "timeout_admin_us": 0, 00:24:15.811 "timeout_us": 0, 00:24:15.811 "transport_ack_timeout": 0, 00:24:15.811 "transport_retry_count": 4, 00:24:15.811 "transport_tos": 0 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "bdev_nvme_set_hotplug", 00:24:15.811 "params": { 00:24:15.811 "enable": false, 00:24:15.811 "period_us": 100000 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "bdev_malloc_create", 00:24:15.811 "params": { 00:24:15.811 "block_size": 4096, 00:24:15.811 "name": "malloc0", 00:24:15.811 "num_blocks": 8192, 00:24:15.811 "optimal_io_boundary": 0, 00:24:15.811 "physical_block_size": 4096, 00:24:15.811 "uuid": "1413fc0e-7ebe-41c2-93cb-bb26e7afc658" 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "bdev_wait_for_examine" 00:24:15.811 } 00:24:15.811 ] 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "subsystem": "nbd", 00:24:15.811 "config": [] 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "subsystem": "scheduler", 00:24:15.811 "config": [ 00:24:15.811 { 00:24:15.811 "method": "framework_set_scheduler", 00:24:15.811 "params": { 00:24:15.811 "name": "static" 00:24:15.811 } 00:24:15.811 } 00:24:15.811 ] 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "subsystem": "nvmf", 00:24:15.811 "config": [ 00:24:15.811 { 00:24:15.811 "method": "nvmf_set_config", 00:24:15.811 "params": { 00:24:15.811 "admin_cmd_passthru": { 00:24:15.811 "identify_ctrlr": false 00:24:15.811 }, 00:24:15.811 "discovery_filter": "match_any" 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "nvmf_set_max_subsystems", 00:24:15.811 "params": { 00:24:15.811 "max_subsystems": 1024 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "nvmf_set_crdt", 00:24:15.811 "params": { 00:24:15.811 "crdt1": 0, 00:24:15.811 "crdt2": 0, 00:24:15.811 "crdt3": 0 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "nvmf_create_transport", 00:24:15.811 "params": { 00:24:15.811 "abort_timeout_sec": 1, 00:24:15.811 "ack_timeout": 0, 00:24:15.811 "buf_cache_size": 4294967295, 00:24:15.811 "c2h_success": false, 00:24:15.811 "data_wr_pool_size": 0, 00:24:15.811 "dif_insert_or_strip": false, 00:24:15.811 "in_capsule_data_size": 4096, 00:24:15.811 "io_unit_size": 131072, 00:24:15.811 "max_aq_depth": 128, 00:24:15.811 "max_io_qpairs_per_ctrlr": 127, 00:24:15.811 "max_io_size": 131072, 00:24:15.811 "max_queue_depth": 128, 00:24:15.811 "num_shared_buffers": 511, 00:24:15.811 "sock_priority": 0, 00:24:15.811 "trtype": "TCP", 00:24:15.811 "zcopy": false 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "nvmf_create_subsystem", 00:24:15.811 "params": { 00:24:15.811 "allow_any_host": false, 00:24:15.811 "ana_reporting": false, 00:24:15.811 "max_cntlid": 65519, 00:24:15.811 "max_namespaces": 10, 00:24:15.811 "min_cntlid": 1, 00:24:15.811 "model_number": "SPDK bdev Controller", 00:24:15.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.811 "serial_number": "SPDK00000000000001" 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": 
"nvmf_subsystem_add_host", 00:24:15.811 "params": { 00:24:15.811 "host": "nqn.2016-06.io.spdk:host1", 00:24:15.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.811 "psk": "/tmp/tmp.j7IIFxUgrO" 00:24:15.811 } 00:24:15.811 }, 00:24:15.811 { 00:24:15.811 "method": "nvmf_subsystem_add_ns", 00:24:15.811 "params": { 00:24:15.811 "namespace": { 00:24:15.811 "bdev_name": "malloc0", 00:24:15.811 "nguid": "1413FC0E7EBE41C293CBBB26E7AFC658", 00:24:15.811 "no_auto_visible": false, 00:24:15.811 "nsid": 1, 00:24:15.811 "uuid": "1413fc0e-7ebe-41c2-93cb-bb26e7afc658" 00:24:15.811 }, 00:24:15.811 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:15.812 } 00:24:15.812 }, 00:24:15.812 { 00:24:15.812 "method": "nvmf_subsystem_add_listener", 00:24:15.812 "params": { 00:24:15.812 "listen_address": { 00:24:15.812 "adrfam": "IPv4", 00:24:15.812 "traddr": "10.0.0.2", 00:24:15.812 "trsvcid": "4420", 00:24:15.812 "trtype": "TCP" 00:24:15.812 }, 00:24:15.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.812 "secure_channel": true 00:24:15.812 } 00:24:15.812 } 00:24:15.812 ] 00:24:15.812 } 00:24:15.812 ] 00:24:15.812 }' 00:24:15.812 00:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:16.071 00:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:16.071 "subsystems": [ 00:24:16.071 { 00:24:16.071 "subsystem": "keyring", 00:24:16.071 "config": [] 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "subsystem": "iobuf", 00:24:16.071 "config": [ 00:24:16.071 { 00:24:16.071 "method": "iobuf_set_options", 00:24:16.071 "params": { 00:24:16.071 "large_bufsize": 135168, 00:24:16.071 "large_pool_count": 1024, 00:24:16.071 "small_bufsize": 8192, 00:24:16.071 "small_pool_count": 8192 00:24:16.071 } 00:24:16.071 } 00:24:16.071 ] 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "subsystem": "sock", 00:24:16.071 "config": [ 00:24:16.071 { 00:24:16.071 "method": "sock_set_default_impl", 00:24:16.071 "params": { 00:24:16.071 "impl_name": "posix" 00:24:16.071 } 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "method": "sock_impl_set_options", 00:24:16.071 "params": { 00:24:16.071 "enable_ktls": false, 00:24:16.071 "enable_placement_id": 0, 00:24:16.071 "enable_quickack": false, 00:24:16.071 "enable_recv_pipe": true, 00:24:16.071 "enable_zerocopy_send_client": false, 00:24:16.071 "enable_zerocopy_send_server": true, 00:24:16.071 "impl_name": "ssl", 00:24:16.071 "recv_buf_size": 4096, 00:24:16.071 "send_buf_size": 4096, 00:24:16.071 "tls_version": 0, 00:24:16.071 "zerocopy_threshold": 0 00:24:16.071 } 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "method": "sock_impl_set_options", 00:24:16.071 "params": { 00:24:16.071 "enable_ktls": false, 00:24:16.071 "enable_placement_id": 0, 00:24:16.071 "enable_quickack": false, 00:24:16.071 "enable_recv_pipe": true, 00:24:16.071 "enable_zerocopy_send_client": false, 00:24:16.071 "enable_zerocopy_send_server": true, 00:24:16.071 "impl_name": "posix", 00:24:16.071 "recv_buf_size": 2097152, 00:24:16.071 "send_buf_size": 2097152, 00:24:16.071 "tls_version": 0, 00:24:16.071 "zerocopy_threshold": 0 00:24:16.071 } 00:24:16.071 } 00:24:16.071 ] 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "subsystem": "vmd", 00:24:16.071 "config": [] 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "subsystem": "accel", 00:24:16.071 "config": [ 00:24:16.071 { 00:24:16.071 "method": "accel_set_options", 00:24:16.071 "params": { 00:24:16.071 "buf_count": 2048, 00:24:16.071 "large_cache_size": 16, 00:24:16.071 "sequence_count": 2048, 00:24:16.071 
"small_cache_size": 128, 00:24:16.071 "task_count": 2048 00:24:16.071 } 00:24:16.071 } 00:24:16.071 ] 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "subsystem": "bdev", 00:24:16.071 "config": [ 00:24:16.071 { 00:24:16.071 "method": "bdev_set_options", 00:24:16.071 "params": { 00:24:16.071 "bdev_auto_examine": true, 00:24:16.071 "bdev_io_cache_size": 256, 00:24:16.071 "bdev_io_pool_size": 65535, 00:24:16.071 "iobuf_large_cache_size": 16, 00:24:16.071 "iobuf_small_cache_size": 128 00:24:16.071 } 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "method": "bdev_raid_set_options", 00:24:16.071 "params": { 00:24:16.071 "process_window_size_kb": 1024 00:24:16.071 } 00:24:16.071 }, 00:24:16.071 { 00:24:16.071 "method": "bdev_iscsi_set_options", 00:24:16.071 "params": { 00:24:16.071 "timeout_sec": 30 00:24:16.071 } 00:24:16.071 }, 00:24:16.072 { 00:24:16.072 "method": "bdev_nvme_set_options", 00:24:16.072 "params": { 00:24:16.072 "action_on_timeout": "none", 00:24:16.072 "allow_accel_sequence": false, 00:24:16.072 "arbitration_burst": 0, 00:24:16.072 "bdev_retry_count": 3, 00:24:16.072 "ctrlr_loss_timeout_sec": 0, 00:24:16.072 "delay_cmd_submit": true, 00:24:16.072 "dhchap_dhgroups": [ 00:24:16.072 "null", 00:24:16.072 "ffdhe2048", 00:24:16.072 "ffdhe3072", 00:24:16.072 "ffdhe4096", 00:24:16.072 "ffdhe6144", 00:24:16.072 "ffdhe8192" 00:24:16.072 ], 00:24:16.072 "dhchap_digests": [ 00:24:16.072 "sha256", 00:24:16.072 "sha384", 00:24:16.072 "sha512" 00:24:16.072 ], 00:24:16.072 "disable_auto_failback": false, 00:24:16.072 "fast_io_fail_timeout_sec": 0, 00:24:16.072 "generate_uuids": false, 00:24:16.072 "high_priority_weight": 0, 00:24:16.072 "io_path_stat": false, 00:24:16.072 "io_queue_requests": 512, 00:24:16.072 "keep_alive_timeout_ms": 10000, 00:24:16.072 "low_priority_weight": 0, 00:24:16.072 "medium_priority_weight": 0, 00:24:16.072 "nvme_adminq_poll_period_us": 10000, 00:24:16.072 "nvme_error_stat": false, 00:24:16.072 "nvme_ioq_poll_period_us": 0, 00:24:16.072 "rdma_cm_event_timeout_ms": 0, 00:24:16.072 "rdma_max_cq_size": 0, 00:24:16.072 "rdma_srq_size": 0, 00:24:16.072 "reconnect_delay_sec": 0, 00:24:16.072 "timeout_admin_us": 0, 00:24:16.072 "timeout_us": 0, 00:24:16.072 "transport_ack_timeout": 0, 00:24:16.072 "transport_retry_count": 4, 00:24:16.072 "transport_tos": 0 00:24:16.072 } 00:24:16.072 }, 00:24:16.072 { 00:24:16.072 "method": "bdev_nvme_attach_controller", 00:24:16.072 "params": { 00:24:16.072 "adrfam": "IPv4", 00:24:16.072 "ctrlr_loss_timeout_sec": 0, 00:24:16.072 "ddgst": false, 00:24:16.072 "fast_io_fail_timeout_sec": 0, 00:24:16.072 "hdgst": false, 00:24:16.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.072 "name": "TLSTEST", 00:24:16.072 "prchk_guard": false, 00:24:16.072 "prchk_reftag": false, 00:24:16.072 "psk": "/tmp/tmp.j7IIFxUgrO", 00:24:16.072 "reconnect_delay_sec": 0, 00:24:16.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.072 "traddr": "10.0.0.2", 00:24:16.072 "trsvcid": "4420", 00:24:16.072 "trtype": "TCP" 00:24:16.072 } 00:24:16.072 }, 00:24:16.072 { 00:24:16.072 "method": "bdev_nvme_set_hotplug", 00:24:16.072 "params": { 00:24:16.072 "enable": false, 00:24:16.072 "period_us": 100000 00:24:16.072 } 00:24:16.072 }, 00:24:16.072 { 00:24:16.072 "method": "bdev_wait_for_examine" 00:24:16.072 } 00:24:16.072 ] 00:24:16.072 }, 00:24:16.072 { 00:24:16.072 "subsystem": "nbd", 00:24:16.072 "config": [] 00:24:16.072 } 00:24:16.072 ] 00:24:16.072 }' 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 92302 00:24:16.072 00:43:20 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92302 ']' 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92302 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92302 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:16.072 killing process with pid 92302 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92302' 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92302 00:24:16.072 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.072 00:24:16.072 Latency(us) 00:24:16.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.072 =================================================================================================================== 00:24:16.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.072 [2024-07-12 00:43:20.813127] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:16.072 00:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92302 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 92197 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92197 ']' 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92197 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92197 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92197' 00:24:17.449 killing process with pid 92197 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92197 00:24:17.449 [2024-07-12 00:43:22.091775] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:17.449 00:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92197 00:24:18.847 00:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:18.847 00:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.847 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:18.847 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.847 00:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:18.847 "subsystems": [ 00:24:18.847 { 00:24:18.847 "subsystem": "keyring", 00:24:18.847 "config": [] 00:24:18.847 }, 00:24:18.847 { 00:24:18.847 "subsystem": "iobuf", 00:24:18.847 "config": [ 00:24:18.847 { 00:24:18.847 "method": 
"iobuf_set_options", 00:24:18.847 "params": { 00:24:18.847 "large_bufsize": 135168, 00:24:18.847 "large_pool_count": 1024, 00:24:18.847 "small_bufsize": 8192, 00:24:18.847 "small_pool_count": 8192 00:24:18.847 } 00:24:18.847 } 00:24:18.847 ] 00:24:18.847 }, 00:24:18.847 { 00:24:18.847 "subsystem": "sock", 00:24:18.847 "config": [ 00:24:18.847 { 00:24:18.847 "method": "sock_set_default_impl", 00:24:18.847 "params": { 00:24:18.847 "impl_name": "posix" 00:24:18.847 } 00:24:18.847 }, 00:24:18.847 { 00:24:18.847 "method": "sock_impl_set_options", 00:24:18.847 "params": { 00:24:18.847 "enable_ktls": false, 00:24:18.847 "enable_placement_id": 0, 00:24:18.847 "enable_quickack": false, 00:24:18.847 "enable_recv_pipe": true, 00:24:18.847 "enable_zerocopy_send_client": false, 00:24:18.847 "enable_zerocopy_send_server": true, 00:24:18.847 "impl_name": "ssl", 00:24:18.847 "recv_buf_size": 4096, 00:24:18.847 "send_buf_size": 4096, 00:24:18.847 "tls_version": 0, 00:24:18.847 "zerocopy_threshold": 0 00:24:18.847 } 00:24:18.847 }, 00:24:18.847 { 00:24:18.847 "method": "sock_impl_set_options", 00:24:18.847 "params": { 00:24:18.847 "enable_ktls": false, 00:24:18.847 "enable_placement_id": 0, 00:24:18.847 "enable_quickack": false, 00:24:18.847 "enable_recv_pipe": true, 00:24:18.847 "enable_zerocopy_send_client": false, 00:24:18.847 "enable_zerocopy_send_server": true, 00:24:18.847 "impl_name": "posix", 00:24:18.847 "recv_buf_size": 2097152, 00:24:18.847 "send_buf_size": 2097152, 00:24:18.848 "tls_version": 0, 00:24:18.848 "zerocopy_threshold": 0 00:24:18.848 } 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "vmd", 00:24:18.848 "config": [] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "accel", 00:24:18.848 "config": [ 00:24:18.848 { 00:24:18.848 "method": "accel_set_options", 00:24:18.848 "params": { 00:24:18.848 "buf_count": 2048, 00:24:18.848 "large_cache_size": 16, 00:24:18.848 "sequence_count": 2048, 00:24:18.848 "small_cache_size": 128, 00:24:18.848 "task_count": 2048 00:24:18.848 } 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "bdev", 00:24:18.848 "config": [ 00:24:18.848 { 00:24:18.848 "method": "bdev_set_options", 00:24:18.848 "params": { 00:24:18.848 "bdev_auto_examine": true, 00:24:18.848 "bdev_io_cache_size": 256, 00:24:18.848 "bdev_io_pool_size": 65535, 00:24:18.848 "iobuf_large_cache_size": 16, 00:24:18.848 "iobuf_small_cache_size": 128 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_raid_set_options", 00:24:18.848 "params": { 00:24:18.848 "process_window_size_kb": 1024 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_iscsi_set_options", 00:24:18.848 "params": { 00:24:18.848 "timeout_sec": 30 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_nvme_set_options", 00:24:18.848 "params": { 00:24:18.848 "action_on_timeout": "none", 00:24:18.848 "allow_accel_sequence": false, 00:24:18.848 "arbitration_burst": 0, 00:24:18.848 "bdev_retry_count": 3, 00:24:18.848 "ctrlr_loss_timeout_sec": 0, 00:24:18.848 "delay_cmd_submit": true, 00:24:18.848 "dhchap_dhgroups": [ 00:24:18.848 "null", 00:24:18.848 "ffdhe2048", 00:24:18.848 "ffdhe3072", 00:24:18.848 "ffdhe4096", 00:24:18.848 "ffdhe6144", 00:24:18.848 "ffdhe8192" 00:24:18.848 ], 00:24:18.848 "dhchap_digests": [ 00:24:18.848 "sha256", 00:24:18.848 "sha384", 00:24:18.848 "sha512" 00:24:18.848 ], 00:24:18.848 "disable_auto_failback": false, 00:24:18.848 "fast_io_fail_timeout_sec": 0, 00:24:18.848 
"generate_uuids": false, 00:24:18.848 "high_priority_weight": 0, 00:24:18.848 "io_path_stat": false, 00:24:18.848 "io_queue_requests": 0, 00:24:18.848 "keep_alive_timeout_ms": 10000, 00:24:18.848 "low_priority_weight": 0, 00:24:18.848 "medium_priority_weight": 0, 00:24:18.848 "nvme_adminq_poll_period_us": 10000, 00:24:18.848 "nvme_error_stat": false, 00:24:18.848 "nvme_ioq_poll_period_us": 0, 00:24:18.848 "rdma_cm_event_timeout_ms": 0, 00:24:18.848 "rdma_max_cq_size": 0, 00:24:18.848 "rdma_srq_size": 0, 00:24:18.848 "reconnect_delay_sec": 0, 00:24:18.848 "timeout_admin_us": 0, 00:24:18.848 "timeout_us": 0, 00:24:18.848 "transport_ack_timeout": 0, 00:24:18.848 "transport_retry_count": 4, 00:24:18.848 "transport_tos": 0 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_nvme_set_hotplug", 00:24:18.848 "params": { 00:24:18.848 "enable": false, 00:24:18.848 "period_us": 100000 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_malloc_create", 00:24:18.848 "params": { 00:24:18.848 "block_size": 4096, 00:24:18.848 "name": "malloc0", 00:24:18.848 "num_blocks": 8192, 00:24:18.848 "optimal_io_boundary": 0, 00:24:18.848 "physical_block_size": 4096, 00:24:18.848 "uuid": "1413fc0e-7ebe-41c2-93cb-bb26e7afc658" 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "bdev_wait_for_examine" 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "nbd", 00:24:18.848 "config": [] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "scheduler", 00:24:18.848 "config": [ 00:24:18.848 { 00:24:18.848 "method": "framework_set_scheduler", 00:24:18.848 "params": { 00:24:18.848 "name": "static" 00:24:18.848 } 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "subsystem": "nvmf", 00:24:18.848 "config": [ 00:24:18.848 { 00:24:18.848 "method": "nvmf_set_config", 00:24:18.848 "params": { 00:24:18.848 "admin_cmd_passthru": { 00:24:18.848 "identify_ctrlr": false 00:24:18.848 }, 00:24:18.848 "discovery_filter": "match_any" 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_set_max_subsystems", 00:24:18.848 "params": { 00:24:18.848 "max_subsystems": 1024 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_set_crdt", 00:24:18.848 "params": { 00:24:18.848 "crdt1": 0, 00:24:18.848 "crdt2": 0, 00:24:18.848 "crdt3": 0 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_create_transport", 00:24:18.848 "params": { 00:24:18.848 "abort_timeout_sec": 1, 00:24:18.848 "ack_timeout": 0, 00:24:18.848 "buf_cache_size": 4294967295, 00:24:18.848 "c2h_success": false, 00:24:18.848 "data_wr_pool_size": 0, 00:24:18.848 "dif_insert_or_strip": false, 00:24:18.848 "in_capsule_data_size": 4096, 00:24:18.848 "io_unit_size": 131072, 00:24:18.848 "max_aq_depth": 128, 00:24:18.848 "max_io_qpairs_per_ctrlr": 127, 00:24:18.848 "max_io_size": 131072, 00:24:18.848 "max_queue_depth": 128, 00:24:18.848 "num_shared_buffers": 511, 00:24:18.848 "sock_priority": 0, 00:24:18.848 "trtype": "TCP", 00:24:18.848 "zcopy": false 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_create_subsystem", 00:24:18.848 "params": { 00:24:18.848 "allow_any_host": false, 00:24:18.848 "ana_reporting": false, 00:24:18.848 "max_cntlid": 65519, 00:24:18.848 "max_namespaces": 10, 00:24:18.848 "min_cntlid": 1, 00:24:18.848 "model_number": "SPDK bdev Controller", 00:24:18.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.848 "serial_number": "SPDK00000000000001" 00:24:18.848 
} 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_subsystem_add_host", 00:24:18.848 "params": { 00:24:18.848 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.848 "psk": "/tmp/tmp.j7IIFxUgrO" 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_subsystem_add_ns", 00:24:18.848 "params": { 00:24:18.848 "namespace": { 00:24:18.848 "bdev_name": "malloc0", 00:24:18.848 "nguid": "1413FC0E7EBE41C293CBBB26E7AFC658", 00:24:18.848 "no_auto_visible": false, 00:24:18.848 "nsid": 1, 00:24:18.848 "uuid": "1413fc0e-7ebe-41c2-93cb-bb26e7afc658" 00:24:18.848 }, 00:24:18.848 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:18.848 } 00:24:18.848 }, 00:24:18.848 { 00:24:18.848 "method": "nvmf_subsystem_add_listener", 00:24:18.848 "params": { 00:24:18.848 "listen_address": { 00:24:18.848 "adrfam": "IPv4", 00:24:18.848 "traddr": "10.0.0.2", 00:24:18.848 "trsvcid": "4420", 00:24:18.848 "trtype": "TCP" 00:24:18.848 }, 00:24:18.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.848 "secure_channel": true 00:24:18.848 } 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 } 00:24:18.848 ] 00:24:18.848 }' 00:24:18.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92404 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92404 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92404 ']' 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.848 00:43:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.849 [2024-07-12 00:43:23.593466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:18.849 [2024-07-12 00:43:23.593643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.849 [2024-07-12 00:43:23.761324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.108 [2024-07-12 00:43:24.013278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.108 [2024-07-12 00:43:24.013349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.108 [2024-07-12 00:43:24.013366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.108 [2024-07-12 00:43:24.013381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.108 [2024-07-12 00:43:24.013406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
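Annotation: the target that just came up (pid 92404) was not configured by hand; the JSON echoed above is the configuration captured earlier with save_config, replayed through nvmfappstart -m 0x2 -c /dev/fd/62. A minimal sketch of that pattern, assuming $tgtconf holds the captured JSON and reusing the binary path from this run (the ip netns wrapper and the -i 0 -e 0xFFFF flags visible in the trace are omitted here):

    # Replay a saved SPDK config into a fresh target via process substitution;
    # /dev/fd/62 in the trace is simply the fd bash assigned to <(...).
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
    nvmfpid=$!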
00:24:19.108 [2024-07-12 00:43:24.013552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.675 [2024-07-12 00:43:24.517523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.675 [2024-07-12 00:43:24.533354] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:19.675 [2024-07-12 00:43:24.549367] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.675 [2024-07-12 00:43:24.549635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.675 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.675 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:19.675 00:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.675 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:19.675 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=92448 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 92448 /var/tmp/bdevperf.sock 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92448 ']' 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.934 00:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:19.934 "subsystems": [ 00:24:19.934 { 00:24:19.934 "subsystem": "keyring", 00:24:19.934 "config": [] 00:24:19.934 }, 00:24:19.934 { 00:24:19.934 "subsystem": "iobuf", 00:24:19.934 "config": [ 00:24:19.934 { 00:24:19.934 "method": "iobuf_set_options", 00:24:19.934 "params": { 00:24:19.934 "large_bufsize": 135168, 00:24:19.934 "large_pool_count": 1024, 00:24:19.934 "small_bufsize": 8192, 00:24:19.934 "small_pool_count": 8192 00:24:19.934 } 00:24:19.934 } 00:24:19.934 ] 00:24:19.934 }, 00:24:19.934 { 00:24:19.934 "subsystem": "sock", 00:24:19.934 "config": [ 00:24:19.934 { 00:24:19.934 "method": "sock_set_default_impl", 00:24:19.934 "params": { 00:24:19.934 "impl_name": "posix" 00:24:19.934 } 00:24:19.934 }, 00:24:19.934 { 00:24:19.934 "method": "sock_impl_set_options", 00:24:19.934 "params": { 00:24:19.934 "enable_ktls": false, 00:24:19.935 "enable_placement_id": 0, 00:24:19.935 "enable_quickack": false, 00:24:19.935 "enable_recv_pipe": true, 00:24:19.935 "enable_zerocopy_send_client": false, 00:24:19.935 "enable_zerocopy_send_server": true, 00:24:19.935 "impl_name": "ssl", 00:24:19.935 "recv_buf_size": 4096, 00:24:19.935 "send_buf_size": 4096, 00:24:19.935 "tls_version": 0, 00:24:19.935 "zerocopy_threshold": 0 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "sock_impl_set_options", 00:24:19.935 "params": { 00:24:19.935 "enable_ktls": false, 00:24:19.935 "enable_placement_id": 0, 00:24:19.935 "enable_quickack": false, 00:24:19.935 "enable_recv_pipe": true, 00:24:19.935 
"enable_zerocopy_send_client": false, 00:24:19.935 "enable_zerocopy_send_server": true, 00:24:19.935 "impl_name": "posix", 00:24:19.935 "recv_buf_size": 2097152, 00:24:19.935 "send_buf_size": 2097152, 00:24:19.935 "tls_version": 0, 00:24:19.935 "zerocopy_threshold": 0 00:24:19.935 } 00:24:19.935 } 00:24:19.935 ] 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "subsystem": "vmd", 00:24:19.935 "config": [] 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "subsystem": "accel", 00:24:19.935 "config": [ 00:24:19.935 { 00:24:19.935 "method": "accel_set_options", 00:24:19.935 "params": { 00:24:19.935 "buf_count": 2048, 00:24:19.935 "large_cache_size": 16, 00:24:19.935 "sequence_count": 2048, 00:24:19.935 "small_cache_size": 128, 00:24:19.935 "task_count": 2048 00:24:19.935 } 00:24:19.935 } 00:24:19.935 ] 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "subsystem": "bdev", 00:24:19.935 "config": [ 00:24:19.935 { 00:24:19.935 "method": "bdev_set_options", 00:24:19.935 "params": { 00:24:19.935 "bdev_auto_examine": true, 00:24:19.935 "bdev_io_cache_size": 256, 00:24:19.935 "bdev_io_pool_size": 65535, 00:24:19.935 "iobuf_large_cache_size": 16, 00:24:19.935 "iobuf_small_cache_size": 128 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_raid_set_options", 00:24:19.935 "params": { 00:24:19.935 "process_window_size_kb": 1024 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_iscsi_set_options", 00:24:19.935 "params": { 00:24:19.935 "timeout_sec": 30 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_nvme_set_options", 00:24:19.935 "params": { 00:24:19.935 "action_on_timeout": "none", 00:24:19.935 "allow_accel_sequence": false, 00:24:19.935 "arbitration_burst": 0, 00:24:19.935 "bdev_retry_count": 3, 00:24:19.935 "ctrlr_loss_timeout_sec": 0, 00:24:19.935 "delay_cmd_submit": true, 00:24:19.935 "dhchap_dhgroups": [ 00:24:19.935 "null", 00:24:19.935 "ffdhe2048", 00:24:19.935 "ffdhe3072", 00:24:19.935 "ffdhe4096", 00:24:19.935 "ffdhe6144", 00:24:19.935 "ffdhe8192" 00:24:19.935 ], 00:24:19.935 "dhchap_digests": [ 00:24:19.935 "sha256", 00:24:19.935 "sha384", 00:24:19.935 "sha512" 00:24:19.935 ], 00:24:19.935 "disable_auto_failback": false, 00:24:19.935 "fast_io_fail_timeout_sec": 0, 00:24:19.935 "generate_uuids": false, 00:24:19.935 "high_priority_weight": 0, 00:24:19.935 "io_path_stat": false, 00:24:19.935 "io_queue_requests": 512, 00:24:19.935 "keep_alive_timeout_ms": 10000, 00:24:19.935 "low_priority_weight": 0, 00:24:19.935 "medium_priority_weight": 0, 00:24:19.935 "nvme_adminq_poll_period_us": 10000, 00:24:19.935 "nvme_error_stat": false, 00:24:19.935 "nvme_ioq_poll_period_us": 0, 00:24:19.935 "rdma_cm_event_timeout_ms": 0, 00:24:19.935 "rdma_max_cq_size": 0, 00:24:19.935 "rdma_srq_size": 0, 00:24:19.935 "reconnect_delay_sec": 0, 00:24:19.935 "timeout_admin_us": 0, 00:24:19.935 "timeout_us": 0, 00:24:19.935 "transport_ack_timeout": 0, 00:24:19.935 "transport_retry_count": 4, 00:24:19.935 "transport_tos": 0 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_nvme_attach_controller", 00:24:19.935 "params": { 00:24:19.935 "adrfam": "IPv4", 00:24:19.935 "ctrlr_loss_timeout_sec": 0, 00:24:19.935 "ddgst": false, 00:24:19.935 "fast_io_fail_timeout_sec": 0, 00:24:19.935 "hdgst": false, 00:24:19.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.935 "name": "TLSTEST", 00:24:19.935 "prchk_guard": false, 00:24:19.935 "prchk_reftag": false, 00:24:19.935 "psk": "/tmp/tmp.j7IIFxUgrO", 00:24:19.935 "reconnect_delay_sec": 0, 00:24:19.935 
"subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.935 "traddr": "10.0.0.2", 00:24:19.935 "trsvcid": "4420", 00:24:19.935 "trtype": "TCP" 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_nvme_set_hotplug", 00:24:19.935 "params": { 00:24:19.935 "enable": false, 00:24:19.935 "period_us": 100000 00:24:19.935 } 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "method": "bdev_wait_for_examine" 00:24:19.935 } 00:24:19.935 ] 00:24:19.935 }, 00:24:19.935 { 00:24:19.935 "subsystem": "nbd", 00:24:19.935 "config": [] 00:24:19.935 } 00:24:19.935 ] 00:24:19.935 }' 00:24:19.935 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.935 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.935 00:43:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.935 [2024-07-12 00:43:24.731512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:19.935 [2024-07-12 00:43:24.731708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92448 ] 00:24:20.194 [2024-07-12 00:43:24.901499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.453 [2024-07-12 00:43:25.183146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.712 [2024-07-12 00:43:25.572174] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.712 [2024-07-12 00:43:25.572369] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:20.970 00:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.970 00:43:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:20.970 00:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.970 Running I/O for 10 seconds... 
00:24:30.995
00:24:30.995 Latency(us)
00:24:30.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:30.995 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:30.995 Verification LBA range: start 0x0 length 0x2000
00:24:30.995 TLSTESTn1 : 10.03 2774.13 10.84 0.00 0.00 46017.69 8757.99 43134.60
00:24:30.995 ===================================================================================================================
00:24:30.995 Total : 2774.13 10.84 0.00 0.00 46017.69 8757.99 43134.60
00:24:30.995 0
00:24:30.995 00:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 92448
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92448 ']'
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92448
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92448
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
killing process with pid 92448
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92448'
00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92448
Received shutdown signal, test time was about 10.000000 seconds
00:24:30.995
00:24:30.995 Latency(us)
00:24:30.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:30.995 ===================================================================================================================
00:24:30.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:30.995 [2024-07-12 00:43:35.908996] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:24:30.995 00:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92448
00:24:32.371 00:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 92404
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92404 ']'
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92404
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92404
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
killing process with pid 92404
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92404'
00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92404
[2024-07-12 00:43:37.193354] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
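Annotation: teardown runs initiator first, target second. killprocess 92448 (bdevperf) precedes killprocess 92404 (the nvmf target, waited on just below), so the TLS connection drops before the listener goes away. The traced fragments of the helper ('[' -z ... ']', kill -0, uname, ps --no-headers, the sudo comparison) suggest roughly the following shape; this is a reconstruction from the trace, not the real autotest_common.sh source, and the sudo-owned branch the trace never takes is elided:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1          # the '[' -z ... ']' check in the trace
        kill -0 "$pid" || return 1         # is the process still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # The real helper escalates via sudo when the owner is sudo; elided here.
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
    }
    killprocess 92448 && wait 92448        # initiator (bdevperf) first
    killprocess 92404 && wait 92404        # then the nvmf target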
00:24:32.371 00:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92404 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92612 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92612 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92612 ']' 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.741 00:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.998 [2024-07-12 00:43:38.693658] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:33.998 [2024-07-12 00:43:38.693870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.998 [2024-07-12 00:43:38.867136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.257 [2024-07-12 00:43:39.167930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.257 [2024-07-12 00:43:39.168015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.257 [2024-07-12 00:43:39.168032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.257 [2024-07-12 00:43:39.168051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.257 [2024-07-12 00:43:39.168063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
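Annotation: the replacement target (pid 92612) again runs with -i 0 -e 0xFFFF, so every tracepoint group is enabled, and its startup notices spell out how to inspect them while it runs. For reference, the two options the notices themselves suggest:

    # Capture a live snapshot of nvmf tracepoints from app instance 0 ...
    spdk_trace -s nvmf -i 0
    # ... or keep the shared-memory trace file for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0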
00:24:34.257 [2024-07-12 00:43:39.168109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.j7IIFxUgrO 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.j7IIFxUgrO 00:24:34.822 00:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:35.081 [2024-07-12 00:43:39.913644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.081 00:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.339 00:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.598 [2024-07-12 00:43:40.441874] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.598 [2024-07-12 00:43:40.442244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.598 00:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.856 malloc0 00:24:36.113 00:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:36.370 00:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.j7IIFxUgrO 00:24:36.628 [2024-07-12 00:43:41.327463] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:36.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
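Annotation: that completes the RPC-by-RPC form of setup_nvmf_tgt, the same sequence seen earlier in this log: transport, subsystem, TLS listener, backing bdev, namespace, and finally the host entry that binds the PSK. Collected from the rpc.py calls in the trace ($rpc and $key are shorthand for the script path and key file used throughout this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.j7IIFxUgrO
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k            # -k: TLS, still experimental
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"   # PSK-by-path: deprecated in v24.09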
00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=92715 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 92715 /var/tmp/bdevperf.sock 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92715 ']' 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.629 00:43:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.629 [2024-07-12 00:43:41.447622] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:36.629 [2024-07-12 00:43:41.447801] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92715 ] 00:24:36.887 [2024-07-12 00:43:41.614598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.144 [2024-07-12 00:43:41.873949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.715 00:43:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.715 00:43:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:37.715 00:43:42 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j7IIFxUgrO 00:24:37.715 00:43:42 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:37.988 [2024-07-12 00:43:42.898702] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.251 nvme0n1 00:24:38.251 00:43:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.251 Running I/O for 1 seconds... 
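Annotation: note the initiator-side difference from the earlier bdevperf run. Rather than handing --psk a file path directly (the deprecated form behind the nvme_ctrlr_psk warnings above), the key file is first registered in the keyring and the controller attach references it by name. Verbatim from the trace, modulo the $rpc shorthand:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    $rpc keyring_file_add_key key0 /tmp/tmp.j7IIFxUgrO
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1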
00:24:39.626
00:24:39.626 Latency(us)
00:24:39.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.626 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:24:39.626 Verification LBA range: start 0x0 length 0x2000
00:24:39.626 nvme0n1 : 1.03 2635.93 10.30 0.00 0.00 47643.96 4944.99 29312.47
00:24:39.626 ===================================================================================================================
00:24:39.626 Total : 2635.93 10.30 0.00 0.00 47643.96 4944.99 29312.47
00:24:39.626 0
00:24:39.627 00:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 92715
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92715 ']'
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92715
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92715
killing process with pid 92715
Received shutdown signal, test time was about 1.000000 seconds
00:24:39.627
00:24:39.627 Latency(us)
00:24:39.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.627 ===================================================================================================================
00:24:39.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92715'
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92715
00:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92715
00:24:40.564 00:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 92612
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92612 ']'
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92612
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92612
killing process with pid 92612
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92612'
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92612
[2024-07-12 00:43:45.450109] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92612
00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:43:46 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@722 -- # xtrace_disable 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92816 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92816 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92816 ']' 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.468 00:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.468 [2024-07-12 00:43:47.036179] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:42.468 [2024-07-12 00:43:47.036849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.468 [2024-07-12 00:43:47.217989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.726 [2024-07-12 00:43:47.480132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.726 [2024-07-12 00:43:47.480484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.726 [2024-07-12 00:43:47.480650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.726 [2024-07-12 00:43:47.480908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.726 [2024-07-12 00:43:47.480967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
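Annotation: every daemon start in this log is bracketed by waitforlisten, and its traced fragments (local rpc_addr=/var/tmp/spdk.sock, local max_retries=100, the closing (( i == 0 )) / return 0 pair) imply a bounded poll loop along these lines. This is a hypothetical reconstruction, not the actual autotest_common.sh implementation; rpc_get_methods is used as the cheapest RPC that answers once the app's RPC server is up:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # daemon died while we waited
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                     # retries exhausted
    }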
00:24:42.726 [2024-07-12 00:43:47.481129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.293 00:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.293 00:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:43.293 00:43:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.293 00:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.293 00:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.293 [2024-07-12 00:43:48.032754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.293 malloc0 00:24:43.293 [2024-07-12 00:43:48.101503] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.293 [2024-07-12 00:43:48.102026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=92866 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 92866 /var/tmp/bdevperf.sock 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92866 ']' 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.293 00:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.552 [2024-07-12 00:43:48.230263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:43.552 [2024-07-12 00:43:48.230461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92866 ] 00:24:43.552 [2024-07-12 00:43:48.399288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.811 [2024-07-12 00:43:48.684308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.379 00:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.379 00:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:44.379 00:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.j7IIFxUgrO 00:24:44.637 00:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:44.896 [2024-07-12 00:43:49.744927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.896 nvme0n1 00:24:45.155 00:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.155 Running I/O for 1 seconds... 00:24:46.090 00:24:46.090 Latency(us) 00:24:46.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.090 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.090 Verification LBA range: start 0x0 length 0x2000 00:24:46.090 nvme0n1 : 1.05 2561.53 10.01 0.00 0.00 49158.63 10664.49 29669.93 00:24:46.090 =================================================================================================================== 00:24:46.090 Total : 2561.53 10.01 0.00 0.00 49158.63 10664.49 29669.93 00:24:46.090 0 00:24:46.090 00:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:46.090 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.090 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.348 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.348 00:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:46.348 "subsystems": [ 00:24:46.348 { 00:24:46.348 "subsystem": "keyring", 00:24:46.348 "config": [ 00:24:46.348 { 00:24:46.348 "method": "keyring_file_add_key", 00:24:46.348 "params": { 00:24:46.348 "name": "key0", 00:24:46.348 "path": "/tmp/tmp.j7IIFxUgrO" 00:24:46.348 } 00:24:46.348 } 00:24:46.348 ] 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "subsystem": "iobuf", 00:24:46.348 "config": [ 00:24:46.348 { 00:24:46.348 "method": "iobuf_set_options", 00:24:46.348 "params": { 00:24:46.348 "large_bufsize": 135168, 00:24:46.348 "large_pool_count": 1024, 00:24:46.348 "small_bufsize": 8192, 00:24:46.348 "small_pool_count": 8192 00:24:46.348 } 00:24:46.348 } 00:24:46.348 ] 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "subsystem": "sock", 00:24:46.348 "config": [ 00:24:46.348 { 00:24:46.348 "method": "sock_set_default_impl", 00:24:46.348 "params": { 00:24:46.348 "impl_name": "posix" 00:24:46.348 } 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "method": "sock_impl_set_options", 00:24:46.348 "params": { 
00:24:46.348 "enable_ktls": false, 00:24:46.348 "enable_placement_id": 0, 00:24:46.348 "enable_quickack": false, 00:24:46.348 "enable_recv_pipe": true, 00:24:46.348 "enable_zerocopy_send_client": false, 00:24:46.348 "enable_zerocopy_send_server": true, 00:24:46.348 "impl_name": "ssl", 00:24:46.348 "recv_buf_size": 4096, 00:24:46.348 "send_buf_size": 4096, 00:24:46.348 "tls_version": 0, 00:24:46.348 "zerocopy_threshold": 0 00:24:46.348 } 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "method": "sock_impl_set_options", 00:24:46.348 "params": { 00:24:46.348 "enable_ktls": false, 00:24:46.348 "enable_placement_id": 0, 00:24:46.348 "enable_quickack": false, 00:24:46.348 "enable_recv_pipe": true, 00:24:46.348 "enable_zerocopy_send_client": false, 00:24:46.348 "enable_zerocopy_send_server": true, 00:24:46.348 "impl_name": "posix", 00:24:46.348 "recv_buf_size": 2097152, 00:24:46.348 "send_buf_size": 2097152, 00:24:46.348 "tls_version": 0, 00:24:46.348 "zerocopy_threshold": 0 00:24:46.348 } 00:24:46.348 } 00:24:46.348 ] 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "subsystem": "vmd", 00:24:46.348 "config": [] 00:24:46.348 }, 00:24:46.348 { 00:24:46.348 "subsystem": "accel", 00:24:46.348 "config": [ 00:24:46.348 { 00:24:46.348 "method": "accel_set_options", 00:24:46.348 "params": { 00:24:46.349 "buf_count": 2048, 00:24:46.349 "large_cache_size": 16, 00:24:46.349 "sequence_count": 2048, 00:24:46.349 "small_cache_size": 128, 00:24:46.349 "task_count": 2048 00:24:46.349 } 00:24:46.349 } 00:24:46.349 ] 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "subsystem": "bdev", 00:24:46.349 "config": [ 00:24:46.349 { 00:24:46.349 "method": "bdev_set_options", 00:24:46.349 "params": { 00:24:46.349 "bdev_auto_examine": true, 00:24:46.349 "bdev_io_cache_size": 256, 00:24:46.349 "bdev_io_pool_size": 65535, 00:24:46.349 "iobuf_large_cache_size": 16, 00:24:46.349 "iobuf_small_cache_size": 128 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_raid_set_options", 00:24:46.349 "params": { 00:24:46.349 "process_window_size_kb": 1024 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_iscsi_set_options", 00:24:46.349 "params": { 00:24:46.349 "timeout_sec": 30 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_nvme_set_options", 00:24:46.349 "params": { 00:24:46.349 "action_on_timeout": "none", 00:24:46.349 "allow_accel_sequence": false, 00:24:46.349 "arbitration_burst": 0, 00:24:46.349 "bdev_retry_count": 3, 00:24:46.349 "ctrlr_loss_timeout_sec": 0, 00:24:46.349 "delay_cmd_submit": true, 00:24:46.349 "dhchap_dhgroups": [ 00:24:46.349 "null", 00:24:46.349 "ffdhe2048", 00:24:46.349 "ffdhe3072", 00:24:46.349 "ffdhe4096", 00:24:46.349 "ffdhe6144", 00:24:46.349 "ffdhe8192" 00:24:46.349 ], 00:24:46.349 "dhchap_digests": [ 00:24:46.349 "sha256", 00:24:46.349 "sha384", 00:24:46.349 "sha512" 00:24:46.349 ], 00:24:46.349 "disable_auto_failback": false, 00:24:46.349 "fast_io_fail_timeout_sec": 0, 00:24:46.349 "generate_uuids": false, 00:24:46.349 "high_priority_weight": 0, 00:24:46.349 "io_path_stat": false, 00:24:46.349 "io_queue_requests": 0, 00:24:46.349 "keep_alive_timeout_ms": 10000, 00:24:46.349 "low_priority_weight": 0, 00:24:46.349 "medium_priority_weight": 0, 00:24:46.349 "nvme_adminq_poll_period_us": 10000, 00:24:46.349 "nvme_error_stat": false, 00:24:46.349 "nvme_ioq_poll_period_us": 0, 00:24:46.349 "rdma_cm_event_timeout_ms": 0, 00:24:46.349 "rdma_max_cq_size": 0, 00:24:46.349 "rdma_srq_size": 0, 00:24:46.349 "reconnect_delay_sec": 0, 00:24:46.349 
"timeout_admin_us": 0, 00:24:46.349 "timeout_us": 0, 00:24:46.349 "transport_ack_timeout": 0, 00:24:46.349 "transport_retry_count": 4, 00:24:46.349 "transport_tos": 0 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_nvme_set_hotplug", 00:24:46.349 "params": { 00:24:46.349 "enable": false, 00:24:46.349 "period_us": 100000 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_malloc_create", 00:24:46.349 "params": { 00:24:46.349 "block_size": 4096, 00:24:46.349 "name": "malloc0", 00:24:46.349 "num_blocks": 8192, 00:24:46.349 "optimal_io_boundary": 0, 00:24:46.349 "physical_block_size": 4096, 00:24:46.349 "uuid": "7cf788f2-fa82-4124-9a04-7ac1cab35560" 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "bdev_wait_for_examine" 00:24:46.349 } 00:24:46.349 ] 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "subsystem": "nbd", 00:24:46.349 "config": [] 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "subsystem": "scheduler", 00:24:46.349 "config": [ 00:24:46.349 { 00:24:46.349 "method": "framework_set_scheduler", 00:24:46.349 "params": { 00:24:46.349 "name": "static" 00:24:46.349 } 00:24:46.349 } 00:24:46.349 ] 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "subsystem": "nvmf", 00:24:46.349 "config": [ 00:24:46.349 { 00:24:46.349 "method": "nvmf_set_config", 00:24:46.349 "params": { 00:24:46.349 "admin_cmd_passthru": { 00:24:46.349 "identify_ctrlr": false 00:24:46.349 }, 00:24:46.349 "discovery_filter": "match_any" 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_set_max_subsystems", 00:24:46.349 "params": { 00:24:46.349 "max_subsystems": 1024 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_set_crdt", 00:24:46.349 "params": { 00:24:46.349 "crdt1": 0, 00:24:46.349 "crdt2": 0, 00:24:46.349 "crdt3": 0 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_create_transport", 00:24:46.349 "params": { 00:24:46.349 "abort_timeout_sec": 1, 00:24:46.349 "ack_timeout": 0, 00:24:46.349 "buf_cache_size": 4294967295, 00:24:46.349 "c2h_success": false, 00:24:46.349 "data_wr_pool_size": 0, 00:24:46.349 "dif_insert_or_strip": false, 00:24:46.349 "in_capsule_data_size": 4096, 00:24:46.349 "io_unit_size": 131072, 00:24:46.349 "max_aq_depth": 128, 00:24:46.349 "max_io_qpairs_per_ctrlr": 127, 00:24:46.349 "max_io_size": 131072, 00:24:46.349 "max_queue_depth": 128, 00:24:46.349 "num_shared_buffers": 511, 00:24:46.349 "sock_priority": 0, 00:24:46.349 "trtype": "TCP", 00:24:46.349 "zcopy": false 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_create_subsystem", 00:24:46.349 "params": { 00:24:46.349 "allow_any_host": false, 00:24:46.349 "ana_reporting": false, 00:24:46.349 "max_cntlid": 65519, 00:24:46.349 "max_namespaces": 32, 00:24:46.349 "min_cntlid": 1, 00:24:46.349 "model_number": "SPDK bdev Controller", 00:24:46.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.349 "serial_number": "00000000000000000000" 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_subsystem_add_host", 00:24:46.349 "params": { 00:24:46.349 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.349 "psk": "key0" 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_subsystem_add_ns", 00:24:46.349 "params": { 00:24:46.349 "namespace": { 00:24:46.349 "bdev_name": "malloc0", 00:24:46.349 "nguid": "7CF788F2FA8241249A047AC1CAB35560", 00:24:46.349 "no_auto_visible": false, 00:24:46.349 "nsid": 1, 00:24:46.349 
"uuid": "7cf788f2-fa82-4124-9a04-7ac1cab35560" 00:24:46.349 }, 00:24:46.349 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:46.349 } 00:24:46.349 }, 00:24:46.349 { 00:24:46.349 "method": "nvmf_subsystem_add_listener", 00:24:46.349 "params": { 00:24:46.349 "listen_address": { 00:24:46.349 "adrfam": "IPv4", 00:24:46.349 "traddr": "10.0.0.2", 00:24:46.349 "trsvcid": "4420", 00:24:46.349 "trtype": "TCP" 00:24:46.349 }, 00:24:46.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.349 "secure_channel": true 00:24:46.349 } 00:24:46.349 } 00:24:46.349 ] 00:24:46.349 } 00:24:46.349 ] 00:24:46.349 }' 00:24:46.349 00:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:46.607 00:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:46.607 "subsystems": [ 00:24:46.607 { 00:24:46.607 "subsystem": "keyring", 00:24:46.607 "config": [ 00:24:46.607 { 00:24:46.607 "method": "keyring_file_add_key", 00:24:46.607 "params": { 00:24:46.607 "name": "key0", 00:24:46.607 "path": "/tmp/tmp.j7IIFxUgrO" 00:24:46.607 } 00:24:46.607 } 00:24:46.607 ] 00:24:46.607 }, 00:24:46.607 { 00:24:46.607 "subsystem": "iobuf", 00:24:46.607 "config": [ 00:24:46.607 { 00:24:46.607 "method": "iobuf_set_options", 00:24:46.607 "params": { 00:24:46.607 "large_bufsize": 135168, 00:24:46.607 "large_pool_count": 1024, 00:24:46.607 "small_bufsize": 8192, 00:24:46.607 "small_pool_count": 8192 00:24:46.607 } 00:24:46.607 } 00:24:46.607 ] 00:24:46.607 }, 00:24:46.607 { 00:24:46.607 "subsystem": "sock", 00:24:46.607 "config": [ 00:24:46.607 { 00:24:46.607 "method": "sock_set_default_impl", 00:24:46.607 "params": { 00:24:46.607 "impl_name": "posix" 00:24:46.607 } 00:24:46.607 }, 00:24:46.607 { 00:24:46.607 "method": "sock_impl_set_options", 00:24:46.607 "params": { 00:24:46.607 "enable_ktls": false, 00:24:46.607 "enable_placement_id": 0, 00:24:46.607 "enable_quickack": false, 00:24:46.607 "enable_recv_pipe": true, 00:24:46.608 "enable_zerocopy_send_client": false, 00:24:46.608 "enable_zerocopy_send_server": true, 00:24:46.608 "impl_name": "ssl", 00:24:46.608 "recv_buf_size": 4096, 00:24:46.608 "send_buf_size": 4096, 00:24:46.608 "tls_version": 0, 00:24:46.608 "zerocopy_threshold": 0 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "sock_impl_set_options", 00:24:46.608 "params": { 00:24:46.608 "enable_ktls": false, 00:24:46.608 "enable_placement_id": 0, 00:24:46.608 "enable_quickack": false, 00:24:46.608 "enable_recv_pipe": true, 00:24:46.608 "enable_zerocopy_send_client": false, 00:24:46.608 "enable_zerocopy_send_server": true, 00:24:46.608 "impl_name": "posix", 00:24:46.608 "recv_buf_size": 2097152, 00:24:46.608 "send_buf_size": 2097152, 00:24:46.608 "tls_version": 0, 00:24:46.608 "zerocopy_threshold": 0 00:24:46.608 } 00:24:46.608 } 00:24:46.608 ] 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "subsystem": "vmd", 00:24:46.608 "config": [] 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "subsystem": "accel", 00:24:46.608 "config": [ 00:24:46.608 { 00:24:46.608 "method": "accel_set_options", 00:24:46.608 "params": { 00:24:46.608 "buf_count": 2048, 00:24:46.608 "large_cache_size": 16, 00:24:46.608 "sequence_count": 2048, 00:24:46.608 "small_cache_size": 128, 00:24:46.608 "task_count": 2048 00:24:46.608 } 00:24:46.608 } 00:24:46.608 ] 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "subsystem": "bdev", 00:24:46.608 "config": [ 00:24:46.608 { 00:24:46.608 "method": "bdev_set_options", 00:24:46.608 "params": { 00:24:46.608 "bdev_auto_examine": true, 
00:24:46.608 "bdev_io_cache_size": 256, 00:24:46.608 "bdev_io_pool_size": 65535, 00:24:46.608 "iobuf_large_cache_size": 16, 00:24:46.608 "iobuf_small_cache_size": 128 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_raid_set_options", 00:24:46.608 "params": { 00:24:46.608 "process_window_size_kb": 1024 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_iscsi_set_options", 00:24:46.608 "params": { 00:24:46.608 "timeout_sec": 30 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_nvme_set_options", 00:24:46.608 "params": { 00:24:46.608 "action_on_timeout": "none", 00:24:46.608 "allow_accel_sequence": false, 00:24:46.608 "arbitration_burst": 0, 00:24:46.608 "bdev_retry_count": 3, 00:24:46.608 "ctrlr_loss_timeout_sec": 0, 00:24:46.608 "delay_cmd_submit": true, 00:24:46.608 "dhchap_dhgroups": [ 00:24:46.608 "null", 00:24:46.608 "ffdhe2048", 00:24:46.608 "ffdhe3072", 00:24:46.608 "ffdhe4096", 00:24:46.608 "ffdhe6144", 00:24:46.608 "ffdhe8192" 00:24:46.608 ], 00:24:46.608 "dhchap_digests": [ 00:24:46.608 "sha256", 00:24:46.608 "sha384", 00:24:46.608 "sha512" 00:24:46.608 ], 00:24:46.608 "disable_auto_failback": false, 00:24:46.608 "fast_io_fail_timeout_sec": 0, 00:24:46.608 "generate_uuids": false, 00:24:46.608 "high_priority_weight": 0, 00:24:46.608 "io_path_stat": false, 00:24:46.608 "io_queue_requests": 512, 00:24:46.608 "keep_alive_timeout_ms": 10000, 00:24:46.608 "low_priority_weight": 0, 00:24:46.608 "medium_priority_weight": 0, 00:24:46.608 "nvme_adminq_poll_period_us": 10000, 00:24:46.608 "nvme_error_stat": false, 00:24:46.608 "nvme_ioq_poll_period_us": 0, 00:24:46.608 "rdma_cm_event_timeout_ms": 0, 00:24:46.608 "rdma_max_cq_size": 0, 00:24:46.608 "rdma_srq_size": 0, 00:24:46.608 "reconnect_delay_sec": 0, 00:24:46.608 "timeout_admin_us": 0, 00:24:46.608 "timeout_us": 0, 00:24:46.608 "transport_ack_timeout": 0, 00:24:46.608 "transport_retry_count": 4, 00:24:46.608 "transport_tos": 0 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_nvme_attach_controller", 00:24:46.608 "params": { 00:24:46.608 "adrfam": "IPv4", 00:24:46.608 "ctrlr_loss_timeout_sec": 0, 00:24:46.608 "ddgst": false, 00:24:46.608 "fast_io_fail_timeout_sec": 0, 00:24:46.608 "hdgst": false, 00:24:46.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.608 "name": "nvme0", 00:24:46.608 "prchk_guard": false, 00:24:46.608 "prchk_reftag": false, 00:24:46.608 "psk": "key0", 00:24:46.608 "reconnect_delay_sec": 0, 00:24:46.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.608 "traddr": "10.0.0.2", 00:24:46.608 "trsvcid": "4420", 00:24:46.608 "trtype": "TCP" 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_nvme_set_hotplug", 00:24:46.608 "params": { 00:24:46.608 "enable": false, 00:24:46.608 "period_us": 100000 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_enable_histogram", 00:24:46.608 "params": { 00:24:46.608 "enable": true, 00:24:46.608 "name": "nvme0n1" 00:24:46.608 } 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "method": "bdev_wait_for_examine" 00:24:46.608 } 00:24:46.608 ] 00:24:46.608 }, 00:24:46.608 { 00:24:46.608 "subsystem": "nbd", 00:24:46.608 "config": [] 00:24:46.608 } 00:24:46.608 ] 00:24:46.608 }' 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 92866 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92866 ']' 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92866 
00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92866 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:46.608 killing process with pid 92866 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92866' 00:24:46.608 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.608 00:24:46.608 Latency(us) 00:24:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.608 =================================================================================================================== 00:24:46.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92866 00:24:46.608 00:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92866 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 92816 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92816 ']' 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92816 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92816 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.986 killing process with pid 92816 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92816' 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92816 00:24:47.986 00:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92816 00:24:49.370 00:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:49.370 00:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.370 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.370 00:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:49.370 "subsystems": [ 00:24:49.370 { 00:24:49.370 "subsystem": "keyring", 00:24:49.370 "config": [ 00:24:49.370 { 00:24:49.370 "method": "keyring_file_add_key", 00:24:49.370 "params": { 00:24:49.370 "name": "key0", 00:24:49.370 "path": "/tmp/tmp.j7IIFxUgrO" 00:24:49.370 } 00:24:49.370 } 00:24:49.370 ] 00:24:49.370 }, 00:24:49.370 { 00:24:49.370 "subsystem": "iobuf", 00:24:49.370 "config": [ 00:24:49.370 { 00:24:49.370 "method": "iobuf_set_options", 00:24:49.370 "params": { 00:24:49.370 "large_bufsize": 135168, 00:24:49.370 "large_pool_count": 1024, 00:24:49.370 "small_bufsize": 8192, 00:24:49.370 "small_pool_count": 8192 00:24:49.370 } 00:24:49.370 } 00:24:49.370 ] 00:24:49.370 }, 00:24:49.370 { 00:24:49.370 "subsystem": "sock", 00:24:49.370 "config": [ 00:24:49.370 { 00:24:49.370 "method": "sock_set_default_impl", 00:24:49.370 "params": { 00:24:49.370 
"impl_name": "posix" 00:24:49.370 } 00:24:49.370 }, 00:24:49.370 { 00:24:49.370 "method": "sock_impl_set_options", 00:24:49.370 "params": { 00:24:49.370 "enable_ktls": false, 00:24:49.370 "enable_placement_id": 0, 00:24:49.370 "enable_quickack": false, 00:24:49.370 "enable_recv_pipe": true, 00:24:49.370 "enable_zerocopy_send_client": false, 00:24:49.370 "enable_zerocopy_send_server": true, 00:24:49.371 "impl_name": "ssl", 00:24:49.371 "recv_buf_size": 4096, 00:24:49.371 "send_buf_size": 4096, 00:24:49.371 "tls_version": 0, 00:24:49.371 "zerocopy_threshold": 0 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "sock_impl_set_options", 00:24:49.371 "params": { 00:24:49.371 "enable_ktls": false, 00:24:49.371 "enable_placement_id": 0, 00:24:49.371 "enable_quickack": false, 00:24:49.371 "enable_recv_pipe": true, 00:24:49.371 "enable_zerocopy_send_client": false, 00:24:49.371 "enable_zerocopy_send_server": true, 00:24:49.371 "impl_name": "posix", 00:24:49.371 "recv_buf_size": 2097152, 00:24:49.371 "send_buf_size": 2097152, 00:24:49.371 "tls_version": 0, 00:24:49.371 "zerocopy_threshold": 0 00:24:49.371 } 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "vmd", 00:24:49.371 "config": [] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "accel", 00:24:49.371 "config": [ 00:24:49.371 { 00:24:49.371 "method": "accel_set_options", 00:24:49.371 "params": { 00:24:49.371 "buf_count": 2048, 00:24:49.371 "large_cache_size": 16, 00:24:49.371 "sequence_count": 2048, 00:24:49.371 "small_cache_size": 128, 00:24:49.371 "task_count": 2048 00:24:49.371 } 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "bdev", 00:24:49.371 "config": [ 00:24:49.371 { 00:24:49.371 "method": "bdev_set_options", 00:24:49.371 "params": { 00:24:49.371 "bdev_auto_examine": true, 00:24:49.371 "bdev_io_cache_size": 256, 00:24:49.371 "bdev_io_pool_size": 65535, 00:24:49.371 "iobuf_large_cache_size": 16, 00:24:49.371 "iobuf_small_cache_size": 128 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_raid_set_options", 00:24:49.371 "params": { 00:24:49.371 "process_window_size_kb": 1024 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_iscsi_set_options", 00:24:49.371 "params": { 00:24:49.371 "timeout_sec": 30 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_nvme_set_options", 00:24:49.371 "params": { 00:24:49.371 "action_on_timeout": "none", 00:24:49.371 "allow_accel_sequence": false, 00:24:49.371 "arbitration_burst": 0, 00:24:49.371 "bdev_retry_count": 3, 00:24:49.371 "ctrlr_loss_timeout_sec": 0, 00:24:49.371 "delay_cmd_submit": true, 00:24:49.371 "dhchap_dhgroups": [ 00:24:49.371 "null", 00:24:49.371 "ffdhe2048", 00:24:49.371 "ffdhe3072", 00:24:49.371 "ffdhe4096", 00:24:49.371 "ffdhe6144", 00:24:49.371 "ffdhe8192" 00:24:49.371 ], 00:24:49.371 "dhchap_digests": [ 00:24:49.371 "sha256", 00:24:49.371 "sha384", 00:24:49.371 "sha512" 00:24:49.371 ], 00:24:49.371 "disable_auto_failback": false, 00:24:49.371 "fast_io_fail_timeout_sec": 0, 00:24:49.371 "generate_uuids": false, 00:24:49.371 "high_priority_weight": 0, 00:24:49.371 "io_path_stat": false, 00:24:49.371 "io_queue_requests": 0, 00:24:49.371 "keep_alive_timeout_ms": 10000, 00:24:49.371 "low_priority_weight": 0, 00:24:49.371 "medium_priority_weight": 0, 00:24:49.371 "nvme_adminq_poll_period_us": 10000, 00:24:49.371 "nvme_error_stat": false, 00:24:49.371 "nvme_ioq_poll_period_us": 0, 00:24:49.371 
"rdma_cm_event_timeout_ms": 0, 00:24:49.371 "rdma_max_cq_size": 0, 00:24:49.371 "rdma_srq_size": 0, 00:24:49.371 "reconnect_delay_sec": 0, 00:24:49.371 "timeout_admin_us": 0, 00:24:49.371 "timeout_us": 0, 00:24:49.371 "transport_ack_timeout": 0, 00:24:49.371 "transport_retry_count": 4, 00:24:49.371 "transport_tos": 0 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_nvme_set_hotplug", 00:24:49.371 "params": { 00:24:49.371 "enable": false, 00:24:49.371 "period_us": 100000 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_malloc_create", 00:24:49.371 "params": { 00:24:49.371 "block_size": 4096, 00:24:49.371 "name": "malloc0", 00:24:49.371 "num_blocks": 8192, 00:24:49.371 "optimal_io_boundary": 0, 00:24:49.371 "physical_block_size": 4096, 00:24:49.371 "uuid": "7cf788f2-fa82-4124-9a04-7ac1cab35560" 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "bdev_wait_for_examine" 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "nbd", 00:24:49.371 "config": [] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "scheduler", 00:24:49.371 "config": [ 00:24:49.371 { 00:24:49.371 "method": "framework_set_scheduler", 00:24:49.371 "params": { 00:24:49.371 "name": "static" 00:24:49.371 } 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "subsystem": "nvmf", 00:24:49.371 "config": [ 00:24:49.371 { 00:24:49.371 "method": "nvmf_set_config", 00:24:49.371 "params": { 00:24:49.371 "admin_cmd_passthru": { 00:24:49.371 "identify_ctrlr": false 00:24:49.371 }, 00:24:49.371 "discovery_filter": "match_any" 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_set_max_subsystems", 00:24:49.371 "params": { 00:24:49.371 "max_subsystems": 1024 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_set_crdt", 00:24:49.371 "params": { 00:24:49.371 "crdt1": 0, 00:24:49.371 "crdt2": 0, 00:24:49.371 "crdt3": 0 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_create_transport", 00:24:49.371 "params": { 00:24:49.371 "abort_timeout_sec": 1, 00:24:49.371 "ack_timeout": 0, 00:24:49.371 "buf_cache_size": 4294967295, 00:24:49.371 "c2h_success": false, 00:24:49.371 "data_wr_pool_size": 0, 00:24:49.371 "dif_insert_or_strip": false, 00:24:49.371 "in_capsule_data_size": 4096, 00:24:49.371 "io_unit_size": 131072, 00:24:49.371 "max_aq_depth": 128, 00:24:49.371 "max_io_qpairs_per_ctrlr": 127, 00:24:49.371 "max_io_size": 131072, 00:24:49.371 "max_queue_depth": 128, 00:24:49.371 "num_shared_buffers": 511, 00:24:49.371 "sock_priority": 0, 00:24:49.371 "trtype": "TCP", 00:24:49.371 "zcopy": false 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_create_subsystem", 00:24:49.371 "params": { 00:24:49.371 "allow_any_host": false, 00:24:49.371 "ana_reporting": false, 00:24:49.371 "max_cntlid": 65519, 00:24:49.371 "max_namespaces": 32, 00:24:49.371 "min_cntlid": 1, 00:24:49.371 "model_number": "SPDK bdev Controller", 00:24:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.371 "serial_number": "00000000000000000000" 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_subsystem_add_host", 00:24:49.371 "params": { 00:24:49.371 "host": "nqn.2016-06.io.spdk:host1", 00:24:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.371 "psk": "key0" 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_subsystem_add_ns", 00:24:49.371 "params": { 00:24:49.371 "namespace": { 00:24:49.371 
"bdev_name": "malloc0", 00:24:49.371 "nguid": "7CF788F2FA8241249A047AC1CAB35560", 00:24:49.371 "no_auto_visible": false, 00:24:49.371 "nsid": 1, 00:24:49.371 "uuid": "7cf788f2-fa82-4124-9a04-7ac1cab35560" 00:24:49.371 }, 00:24:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:49.371 } 00:24:49.371 }, 00:24:49.371 { 00:24:49.371 "method": "nvmf_subsystem_add_listener", 00:24:49.371 "params": { 00:24:49.371 "listen_address": { 00:24:49.371 "adrfam": "IPv4", 00:24:49.371 "traddr": "10.0.0.2", 00:24:49.371 "trsvcid": "4420", 00:24:49.371 "trtype": "TCP" 00:24:49.371 }, 00:24:49.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.371 "secure_channel": true 00:24:49.371 } 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 } 00:24:49.371 ] 00:24:49.371 }' 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=92982 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 92982 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 92982 ']' 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.371 00:43:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.630 [2024-07-12 00:43:54.390174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:49.630 [2024-07-12 00:43:54.390411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.889 [2024-07-12 00:43:54.569230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.149 [2024-07-12 00:43:54.831288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.149 [2024-07-12 00:43:54.831376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.149 [2024-07-12 00:43:54.831408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.149 [2024-07-12 00:43:54.831426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.149 [2024-07-12 00:43:54.831438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:50.149 [2024-07-12 00:43:54.831602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.716 [2024-07-12 00:43:55.371326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.716 [2024-07-12 00:43:55.403217] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.716 [2024-07-12 00:43:55.403524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=93025 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 93025 /var/tmp/bdevperf.sock 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 93025 ']' 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
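[Editor's note] Once the replayed bdevperf is up, the script does not re-issue the attach RPCs; the nvme0 controller comes from the config fed in on /dev/fd/63. It only verifies the controller by name before starting I/O. A readable reconstruction of the sequence traced further below (tls.sh@275-276; rpc.py abbreviates the full scripts/rpc.py path):

    name=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]    # the [[ nvme0 == \n\v\m\e\0 ]] check in the trace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests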
00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:50.716 00:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:50.716 "subsystems": [ 00:24:50.716 { 00:24:50.716 "subsystem": "keyring", 00:24:50.716 "config": [ 00:24:50.716 { 00:24:50.716 "method": "keyring_file_add_key", 00:24:50.716 "params": { 00:24:50.716 "name": "key0", 00:24:50.716 "path": "/tmp/tmp.j7IIFxUgrO" 00:24:50.716 } 00:24:50.716 } 00:24:50.716 ] 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "subsystem": "iobuf", 00:24:50.716 "config": [ 00:24:50.716 { 00:24:50.716 "method": "iobuf_set_options", 00:24:50.716 "params": { 00:24:50.716 "large_bufsize": 135168, 00:24:50.716 "large_pool_count": 1024, 00:24:50.716 "small_bufsize": 8192, 00:24:50.716 "small_pool_count": 8192 00:24:50.716 } 00:24:50.716 } 00:24:50.716 ] 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "subsystem": "sock", 00:24:50.716 "config": [ 00:24:50.716 { 00:24:50.716 "method": "sock_set_default_impl", 00:24:50.716 "params": { 00:24:50.716 "impl_name": "posix" 00:24:50.716 } 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "method": "sock_impl_set_options", 00:24:50.716 "params": { 00:24:50.716 "enable_ktls": false, 00:24:50.716 "enable_placement_id": 0, 00:24:50.716 "enable_quickack": false, 00:24:50.716 "enable_recv_pipe": true, 00:24:50.716 "enable_zerocopy_send_client": false, 00:24:50.716 "enable_zerocopy_send_server": true, 00:24:50.716 "impl_name": "ssl", 00:24:50.716 "recv_buf_size": 4096, 00:24:50.716 "send_buf_size": 4096, 00:24:50.716 "tls_version": 0, 00:24:50.716 "zerocopy_threshold": 0 00:24:50.716 } 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "method": "sock_impl_set_options", 00:24:50.716 "params": { 00:24:50.716 "enable_ktls": false, 00:24:50.716 "enable_placement_id": 0, 00:24:50.716 "enable_quickack": false, 00:24:50.716 "enable_recv_pipe": true, 00:24:50.716 "enable_zerocopy_send_client": false, 00:24:50.716 "enable_zerocopy_send_server": true, 00:24:50.716 "impl_name": "posix", 00:24:50.716 "recv_buf_size": 2097152, 00:24:50.716 "send_buf_size": 2097152, 00:24:50.716 "tls_version": 0, 00:24:50.716 "zerocopy_threshold": 0 00:24:50.716 } 00:24:50.716 } 00:24:50.716 ] 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "subsystem": "vmd", 00:24:50.716 "config": [] 00:24:50.716 }, 00:24:50.716 { 00:24:50.716 "subsystem": "accel", 00:24:50.716 "config": [ 00:24:50.716 { 00:24:50.716 "method": "accel_set_options", 00:24:50.716 "params": { 00:24:50.716 "buf_count": 2048, 00:24:50.716 "large_cache_size": 16, 00:24:50.716 "sequence_count": 2048, 00:24:50.716 "small_cache_size": 128, 00:24:50.717 "task_count": 2048 00:24:50.717 } 00:24:50.717 } 00:24:50.717 ] 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "subsystem": "bdev", 00:24:50.717 "config": [ 00:24:50.717 { 00:24:50.717 "method": "bdev_set_options", 00:24:50.717 "params": { 00:24:50.717 "bdev_auto_examine": true, 00:24:50.717 "bdev_io_cache_size": 256, 00:24:50.717 "bdev_io_pool_size": 65535, 00:24:50.717 "iobuf_large_cache_size": 16, 00:24:50.717 "iobuf_small_cache_size": 128 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_raid_set_options", 00:24:50.717 "params": { 00:24:50.717 "process_window_size_kb": 1024 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 
{ 00:24:50.717 "method": "bdev_iscsi_set_options", 00:24:50.717 "params": { 00:24:50.717 "timeout_sec": 30 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_nvme_set_options", 00:24:50.717 "params": { 00:24:50.717 "action_on_timeout": "none", 00:24:50.717 "allow_accel_sequence": false, 00:24:50.717 "arbitration_burst": 0, 00:24:50.717 "bdev_retry_count": 3, 00:24:50.717 "ctrlr_loss_timeout_sec": 0, 00:24:50.717 "delay_cmd_submit": true, 00:24:50.717 "dhchap_dhgroups": [ 00:24:50.717 "null", 00:24:50.717 "ffdhe2048", 00:24:50.717 "ffdhe3072", 00:24:50.717 "ffdhe4096", 00:24:50.717 "ffdhe6144", 00:24:50.717 "ffdhe8192" 00:24:50.717 ], 00:24:50.717 "dhchap_digests": [ 00:24:50.717 "sha256", 00:24:50.717 "sha384", 00:24:50.717 "sha512" 00:24:50.717 ], 00:24:50.717 "disable_auto_failback": false, 00:24:50.717 "fast_io_fail_timeout_sec": 0, 00:24:50.717 "generate_uuids": false, 00:24:50.717 "high_priority_weight": 0, 00:24:50.717 "io_path_stat": false, 00:24:50.717 "io_queue_requests": 512, 00:24:50.717 "keep_alive_timeout_ms": 10000, 00:24:50.717 "low_priority_weight": 0, 00:24:50.717 "medium_priority_weight": 0, 00:24:50.717 "nvme_adminq_poll_period_us": 10000, 00:24:50.717 "nvme_error_stat": false, 00:24:50.717 "nvme_ioq_poll_period_us": 0, 00:24:50.717 "rdma_cm_event_timeout_ms": 0, 00:24:50.717 "rdma_max_cq_size": 0, 00:24:50.717 "rdma_srq_size": 0, 00:24:50.717 "reconnect_delay_sec": 0, 00:24:50.717 "timeout_admin_us": 0, 00:24:50.717 "timeout_us": 0, 00:24:50.717 "transport_ack_timeout": 0, 00:24:50.717 "transport_retry_count": 4, 00:24:50.717 "transport_tos": 0 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_nvme_attach_controller", 00:24:50.717 "params": { 00:24:50.717 "adrfam": "IPv4", 00:24:50.717 "ctrlr_loss_timeout_sec": 0, 00:24:50.717 "ddgst": false, 00:24:50.717 "fast_io_fail_timeout_sec": 0, 00:24:50.717 "hdgst": false, 00:24:50.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.717 "name": "nvme0", 00:24:50.717 "prchk_guard": false, 00:24:50.717 "prchk_reftag": false, 00:24:50.717 "psk": "key0", 00:24:50.717 "reconnect_delay_sec": 0, 00:24:50.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.717 "traddr": "10.0.0.2", 00:24:50.717 "trsvcid": "4420", 00:24:50.717 "trtype": "TCP" 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_nvme_set_hotplug", 00:24:50.717 "params": { 00:24:50.717 "enable": false, 00:24:50.717 "period_us": 100000 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_enable_histogram", 00:24:50.717 "params": { 00:24:50.717 "enable": true, 00:24:50.717 "name": "nvme0n1" 00:24:50.717 } 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "method": "bdev_wait_for_examine" 00:24:50.717 } 00:24:50.717 ] 00:24:50.717 }, 00:24:50.717 { 00:24:50.717 "subsystem": "nbd", 00:24:50.717 "config": [] 00:24:50.717 } 00:24:50.717 ] 00:24:50.717 }' 00:24:50.717 [2024-07-12 00:43:55.610256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:50.717 [2024-07-12 00:43:55.610471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93025 ] 00:24:50.976 [2024-07-12 00:43:55.789205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.234 [2024-07-12 00:43:56.071003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.801 [2024-07-12 00:43:56.481515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.801 00:43:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.801 00:43:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:51.801 00:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.801 00:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:52.058 00:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.059 00:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.329 Running I/O for 1 seconds... 00:24:53.311 00:24:53.311 Latency(us) 00:24:53.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:53.311 Verification LBA range: start 0x0 length 0x2000 00:24:53.311 nvme0n1 : 1.03 2604.05 10.17 0.00 0.00 48397.81 8996.31 37176.79 00:24:53.311 =================================================================================================================== 00:24:53.311 Total : 2604.05 10.17 0.00 0.00 48397.81 8996.31 37176.79 00:24:53.311 0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:53.311 nvmf_trace.0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 93025 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 93025 ']' 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 93025 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:53.311 
00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93025 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93025' 00:24:53.311 killing process with pid 93025 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 93025 00:24:53.311 Received shutdown signal, test time was about 1.000000 seconds 00:24:53.311 00:24:53.311 Latency(us) 00:24:53.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.311 =================================================================================================================== 00:24:53.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.311 00:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 93025 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.686 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.686 rmmod nvme_tcp 00:24:54.686 rmmod nvme_fabrics 00:24:54.686 rmmod nvme_keyring 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 92982 ']' 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 92982 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 92982 ']' 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 92982 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92982 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:54.687 killing process with pid 92982 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92982' 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 92982 00:24:54.687 00:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 92982 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.061 00:44:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.320 00:44:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:56.320 00:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FHpIGdeYIr /tmp/tmp.jkOLbLZgFa /tmp/tmp.j7IIFxUgrO 00:24:56.320 00:24:56.320 real 1m51.313s 00:24:56.320 user 2m57.069s 00:24:56.320 sys 0m29.490s 00:24:56.320 00:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.320 00:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.320 ************************************ 00:24:56.320 END TEST nvmf_tls 00:24:56.320 ************************************ 00:24:56.320 00:44:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.320 00:44:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:56.320 00:44:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.320 00:44:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.320 00:44:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.320 ************************************ 00:24:56.320 START TEST nvmf_fips 00:24:56.320 ************************************ 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:56.320 * Looking for test storage... 
00:24:56.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.320 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:56.321 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:56.580 Error setting digest 00:24:56.580 0012A23A917F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:56.580 0012A23A917F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:56.580 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:56.581 Cannot find device "nvmf_tgt_br" 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:56.581 Cannot find device "nvmf_tgt_br2" 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:56.581 Cannot find device "nvmf_tgt_br" 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:56.581 Cannot find device "nvmf_tgt_br2" 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:56.581 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:56.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:56.839 00:24:56.839 --- 10.0.0.2 ping statistics --- 00:24:56.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.839 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:56.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:56.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:56.839 00:24:56.839 --- 10.0.0.3 ping statistics --- 00:24:56.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.839 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:56.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:56.839 00:24:56.839 --- 10.0.0.1 ping statistics --- 00:24:56.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.839 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=93332 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 93332 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 93332 ']' 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.839 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.840 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.840 00:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.097 [2024-07-12 00:44:01.865815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
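The nvmf_veth_init trace above builds the topology every nvmf TCP test in this run relies on: an initiator interface in the default namespace, target interfaces moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying their veth peers together. A minimal standalone replay, assuming root and iproute2 (interface names and the 10.0.0.0/24 plan are copied from the trace; the second target interface is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target, as verified above
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace are expected: the teardown half of nvmf_veth_init runs first on a clean host, and each failing delete command is followed by a true guard so the script continues.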
00:24:57.097 [2024-07-12 00:44:01.865996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.356 [2024-07-12 00:44:02.045789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.615 [2024-07-12 00:44:02.353499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.615 [2024-07-12 00:44:02.353578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.615 [2024-07-12 00:44:02.353601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.615 [2024-07-12 00:44:02.353643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.615 [2024-07-12 00:44:02.353658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.615 [2024-07-12 00:44:02.353705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.873 00:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.874 00:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:57.874 00:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.874 00:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.874 00:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:58.180 00:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.180 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:58.180 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:58.181 00:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:58.181 [2024-07-12 00:44:03.052085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.181 [2024-07-12 00:44:03.068006] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.181 [2024-07-12 00:44:03.068271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.438 [2024-07-12 00:44:03.126735] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:58.438 malloc0 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=93389 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 93389 /var/tmp/bdevperf.sock 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 93389 ']' 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.438 00:44:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 [2024-07-12 00:44:03.311759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:58.438 [2024-07-12 00:44:03.312650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93389 ] 00:24:58.695 [2024-07-12 00:44:03.489097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.954 [2024-07-12 00:44:03.735737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.520 00:44:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.520 00:44:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:59.520 00:44:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:59.779 [2024-07-12 00:44:04.491907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.779 [2024-07-12 00:44:04.492086] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:59.779 TLSTESTn1 00:24:59.779 00:44:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.037 Running I/O for 10 seconds... 
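One quick consistency check on the bdevperf summary that follows: for the 4096-byte verify workload the IOPS and MiB/s columns should agree, and they do:

  2647.36 IOPS x 4096 B = 10,843,586 B/s ~= 10.34 MiB/s

which matches the MiB/s column in the table below (the run reports 10.04 s of actual runtime against the requested 10 s).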
00:25:10.009 00:25:10.009 Latency(us) 00:25:10.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.009 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:10.009 Verification LBA range: start 0x0 length 0x2000 00:25:10.009 TLSTESTn1 : 10.04 2647.36 10.34 0.00 0.00 48226.92 11736.90 40513.16 00:25:10.009 =================================================================================================================== 00:25:10.009 Total : 2647.36 10.34 0.00 0.00 48226.92 11736.90 40513.16 00:25:10.009 0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:10.009 nvmf_trace.0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 93389 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 93389 ']' 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 93389 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93389 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:10.009 killing process with pid 93389 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93389' 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 93389 00:25:10.009 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.009 00:25:10.009 Latency(us) 00:25:10.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.009 =================================================================================================================== 00:25:10.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.009 [2024-07-12 00:44:14.911791] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:10.009 00:44:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 93389 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
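The deliberate openssl md5 failure above is the point of the FIPS check (the provider must refuse the digest); the workload itself is NVMe/TCP with a TLS pre-shared key. Condensed, the initiator-side setup replayed in this block looks like the sketch below (paths relative to the spdk repo; the key string is copied from the trace; the target-side subsystem and host wiring goes through scripts/rpc.py in fips.sh@24, whose exact subcommands are elided in the trace, so only the initiator side is shown and the target is assumed to already listen on 10.0.0.2:4420):

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note the two deprecation warnings in the trace: both the target-side PSK path and spdk_nvme_ctrlr_opts.psk are scheduled for removal in v24.09, so this invocation shape is specific to the v24.09-pre build under test.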
00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.382 rmmod nvme_tcp 00:25:11.382 rmmod nvme_fabrics 00:25:11.382 rmmod nvme_keyring 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 93332 ']' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 93332 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 93332 ']' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 93332 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93332 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:11.382 killing process with pid 93332 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93332' 00:25:11.382 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 93332 00:25:11.382 [2024-07-12 00:44:16.300135] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.383 00:44:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 93332 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:25:12.756 ************************************ 00:25:12.756 END TEST nvmf_fips 00:25:12.756 ************************************ 00:25:12.756 00:25:12.756 real 0m16.611s 00:25:12.756 user 0m23.504s 00:25:12.756 sys 0m5.634s 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.756 00:44:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.014 00:44:17 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.014 00:44:17 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:13.014 00:44:17 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:13.014 00:44:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.014 00:44:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.014 00:44:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.014 ************************************ 00:25:13.014 START TEST nvmf_fuzz 00:25:13.014 ************************************ 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:13.014 * Looking for test storage... 00:25:13.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.014 00:44:17 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.014 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:13.015 Cannot find device "nvmf_tgt_br" 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.015 Cannot find device "nvmf_tgt_br2" 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:13.015 Cannot find device "nvmf_tgt_br" 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:13.015 Cannot find device "nvmf_tgt_br2" 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:13.015 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.273 00:44:17 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:13.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:13.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:25:13.273 00:25:13.273 --- 10.0.0.2 ping statistics --- 00:25:13.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.273 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:13.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:25:13.273 00:25:13.273 --- 10.0.0.3 ping statistics --- 00:25:13.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.273 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:13.273 00:25:13.273 --- 10.0.0.1 ping statistics --- 00:25:13.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.273 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=93758 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 93758 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 93758 ']' 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
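Unlike the FIPS run, the fuzz target is wired up explicitly over RPC, as the rpc_cmd calls just below show. Stripped of the xtrace plumbing, the whole fabrics_fuzz setup is five RPCs plus the fuzzer itself (flags copied from the trace below; rpc_cmd is assumed to wrap scripts/rpc.py against the nvmf_tgt started above on /var/tmp/spdk.sock, and paths are relative to the spdk repo):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The -S 123456 seed makes the 30-second random pass reproducible; the second nvme_fuzz invocation in the trace drops the timed random mode and replays the fixed cases from example.json instead.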
00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.273 00:44:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 Malloc0 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:14.644 00:44:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:15.578 Shutting down the fuzz application 00:25:15.578 00:44:20 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:16.513 Shutting down the fuzz application 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.513 rmmod nvme_tcp 00:25:16.513 rmmod nvme_fabrics 00:25:16.513 rmmod nvme_keyring 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 93758 ']' 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 93758 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 93758 ']' 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 93758 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93758 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:16.513 killing process with pid 93758 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93758' 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 93758 00:25:16.513 00:44:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 93758 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.416 00:44:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:18.416 00:44:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:25:18.416 00:25:18.416 real 0m5.280s 00:25:18.416 user 0m6.355s 00:25:18.416 sys 0m0.924s 00:25:18.416 00:44:23 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:18.416 00:44:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.416 ************************************ 00:25:18.416 END TEST nvmf_fuzz 00:25:18.416 ************************************ 00:25:18.416 00:44:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:18.416 00:44:23 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:18.416 00:44:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:18.416 00:44:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.416 00:44:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.416 ************************************ 00:25:18.416 START TEST nvmf_multiconnection 00:25:18.416 ************************************ 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:18.416 * Looking for test storage... 00:25:18.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:18.416 Cannot find device "nvmf_tgt_br" 00:25:18.416 00:44:23 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:18.416 Cannot find device "nvmf_tgt_br2" 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:18.416 Cannot find device "nvmf_tgt_br" 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:18.416 Cannot find device "nvmf_tgt_br2" 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:18.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:25:18.416 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:18.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.417 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:25:18.417 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:18.417 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:18.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:25:18.675 00:25:18.675 --- 10.0.0.2 ping statistics --- 00:25:18.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.675 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:18.675 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:18.675 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:25:18.675 00:25:18.675 --- 10.0.0.3 ping statistics --- 00:25:18.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.675 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:18.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:18.675 00:25:18.675 --- 10.0.0.1 ping statistics --- 00:25:18.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.675 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=94032 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 94032 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 94032 ']' 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.675 00:44:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.934 [2024-07-12 00:44:23.712450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:18.934 [2024-07-12 00:44:23.712608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.192 [2024-07-12 00:44:23.893852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.450 [2024-07-12 00:44:24.210080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
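The nvmf_veth_init sequence traced above builds the entire virtual test topology before the target starts: a network namespace for the target, three veth pairs, a bridge tying the host-side ends together, and iptables rules opening TCP port 4420. A condensed sketch of what the trace shows, using the same interface names and addresses as the log (the real function in test/nvmf/common.sh also tears down leftovers from earlier runs, which is what the benign "Cannot find device" / "Cannot open network namespace" lines are):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$link" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br                     # bridge all host-side ends
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # reachability check

The sub-0.1 ms round trips in the ping output confirm the bridged path works. nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 94032 here) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock accepts connections, which is when the DPDK/EAL and reactor startup notices below appear.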
00:25:19.450 [2024-07-12 00:44:24.210143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.451 [2024-07-12 00:44:24.210162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.451 [2024-07-12 00:44:24.210178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.451 [2024-07-12 00:44:24.210191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.451 [2024-07-12 00:44:24.210431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.451 [2024-07-12 00:44:24.211370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.451 [2024-07-12 00:44:24.211498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.451 [2024-07-12 00:44:24.211513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 [2024-07-12 00:44:24.719348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 Malloc1 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 [2024-07-12 00:44:24.847815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 Malloc2 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.018 00:44:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 Malloc3 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 Malloc4 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.277 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.278 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:20.278 00:44:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.278 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 Malloc5 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 Malloc6 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 Malloc7 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.536 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.794 Malloc8 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.794 00:44:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.794 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.795 Malloc9 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.795 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.052 Malloc10 00:25:21.052 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.052 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 Malloc11 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.053 00:44:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:21.311 00:44:26 nvmf_tcp.nvmf_multiconnection -- 
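With the target up, multiconnection.sh provisions NVMF_SUBSYS=11 subsystems in a loop: one 64 MiB malloc bdev with 512-byte blocks per subsystem, each exposed as a namespace behind its own NQN and listening on 10.0.0.2:4420. A condensed equivalent of the rpc_cmd calls traced above (a sketch, assuming the repo's standard scripts/rpc.py against the default /var/tmp/spdk.sock; transport flags copied verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The -a flag on nvmf_create_subsystem allows any host to connect, and -s sets the serial number (SPDK1 through SPDK11), which is what the initiator side greps for below to confirm each connection came up.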
target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:21.311 00:44:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:21.311 00:44:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.311 00:44:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:21.311 00:44:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.210 00:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:23.469 00:44:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:23.469 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.469 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.469 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.469 00:44:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.370 00:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:25.628 00:44:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:25.628 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:25.628 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.628 00:44:30 
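Each nvme connect is immediately followed by waitforserial, which polls lsblk until a block device whose SERIAL column matches the subsystem's serial appears. A simplified sketch of the helper that the (( i++ <= 15 )) / lsblk / grep -c lines above come from (the real version lives in autotest_common.sh and differs in minor details):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do                       # bounded retry loop
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    waitforserial SPDK3

Every connection in this run settles after a single two-second sleep, so all eleven namespaces come up in roughly 25 seconds of wall-clock time (00:44:26 through 00:44:50 in the timestamps that follow).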
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:25.628 00:44:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.555 00:44:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:27.814 00:44:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:27.814 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.814 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.814 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.814 00:44:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.718 00:44:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:29.977 00:44:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:29.977 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:29.977 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.977 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:29.977 00:44:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.886 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.886 00:44:36 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.886 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:32.152 00:44:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.684 00:44:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.684 00:44:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.684 00:44:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:34.684 00:44:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:36.588 
00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:36.588 00:44:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:38.491 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:38.491 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:38.491 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:38.750 00:44:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.281 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:41.282 00:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.182 00:44:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:43.182 00:44:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:43.182 00:44:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:43.182 00:44:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.182 00:44:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:43.182 00:44:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:45.725 00:44:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:45.725 [global] 00:25:45.725 thread=1 00:25:45.725 invalidate=1 00:25:45.725 rw=read 00:25:45.725 time_based=1 00:25:45.725 runtime=10 00:25:45.725 ioengine=libaio 00:25:45.725 direct=1 00:25:45.725 bs=262144 00:25:45.725 iodepth=64 
00:25:45.725 norandommap=1 00:25:45.725 numjobs=1 00:25:45.725 00:25:45.725 [job0] 00:25:45.725 filename=/dev/nvme0n1 00:25:45.725 [job1] 00:25:45.725 filename=/dev/nvme10n1 00:25:45.725 [job2] 00:25:45.725 filename=/dev/nvme1n1 00:25:45.725 [job3] 00:25:45.725 filename=/dev/nvme2n1 00:25:45.725 [job4] 00:25:45.725 filename=/dev/nvme3n1 00:25:45.725 [job5] 00:25:45.725 filename=/dev/nvme4n1 00:25:45.725 [job6] 00:25:45.725 filename=/dev/nvme5n1 00:25:45.725 [job7] 00:25:45.725 filename=/dev/nvme6n1 00:25:45.725 [job8] 00:25:45.725 filename=/dev/nvme7n1 00:25:45.725 [job9] 00:25:45.725 filename=/dev/nvme8n1 00:25:45.725 [job10] 00:25:45.725 filename=/dev/nvme9n1 00:25:45.725 Could not set queue depth (nvme0n1) 00:25:45.725 Could not set queue depth (nvme10n1) 00:25:45.725 Could not set queue depth (nvme1n1) 00:25:45.725 Could not set queue depth (nvme2n1) 00:25:45.725 Could not set queue depth (nvme3n1) 00:25:45.725 Could not set queue depth (nvme4n1) 00:25:45.725 Could not set queue depth (nvme5n1) 00:25:45.725 Could not set queue depth (nvme6n1) 00:25:45.725 Could not set queue depth (nvme7n1) 00:25:45.725 Could not set queue depth (nvme8n1) 00:25:45.725 Could not set queue depth (nvme9n1) 00:25:45.725 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:45.725 fio-3.35 00:25:45.725 Starting 11 threads 00:25:57.984 00:25:57.984 job0: (groupid=0, jobs=1): err= 0: pid=94507: Fri Jul 12 00:45:00 2024 00:25:57.984 read: IOPS=298, BW=74.5MiB/s (78.1MB/s)(758MiB/10174msec) 00:25:57.984 slat (usec): min=20, max=117970, avg=3299.12, stdev=11143.45 00:25:57.984 clat (msec): min=26, max=419, avg=211.08, stdev=35.43 00:25:57.984 lat (msec): min=26, max=419, avg=214.38, stdev=37.28 00:25:57.984 clat percentiles (msec): 00:25:57.984 | 1.00th=[ 53], 5.00th=[ 171], 10.00th=[ 184], 20.00th=[ 194], 00:25:57.984 | 30.00th=[ 201], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 215], 00:25:57.984 | 70.00th=[ 220], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 262], 00:25:57.984 | 99.00th=[ 284], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:25:57.984 | 99.99th=[ 418] 00:25:57.984 bw ( KiB/s): min=54272, max=96256, per=5.37%, avg=75972.75, stdev=8865.24, samples=20 00:25:57.984 iops : min= 212, max= 376, avg=296.70, stdev=34.63, samples=20 
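fio-wrapper generated the job file printed above: a global sequential-read workload (libaio, direct I/O, 256 KiB blocks, queue depth 64, 10 s time-based) plus one job per connected namespace, /dev/nvme0n1 through /dev/nvme10n1. The "Could not set queue depth" warnings are commonly seen with libaio against NVMe block devices and do not fail the run. For reference, a single-device equivalent of one of these jobs, reconstructed from the options in the file (a hypothetical standalone invocation, not what the wrapper itself executes):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=262144 --iodepth=64 --numjobs=1 --thread=1 \
        --time_based=1 --runtime=10 --norandommap=1 --invalidate=1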
00:25:57.984 lat (msec) : 50=0.89%, 100=1.35%, 250=87.34%, 500=10.42% 00:25:57.984 cpu : usr=0.13%, sys=1.25%, ctx=577, majf=0, minf=4097 00:25:57.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.984 issued rwts: total=3032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.984 job1: (groupid=0, jobs=1): err= 0: pid=94508: Fri Jul 12 00:45:00 2024 00:25:57.984 read: IOPS=538, BW=135MiB/s (141MB/s)(1354MiB/10053msec) 00:25:57.984 slat (usec): min=17, max=77243, avg=1842.30, stdev=6251.48 00:25:57.984 clat (msec): min=47, max=197, avg=116.83, stdev=24.87 00:25:57.984 lat (msec): min=50, max=200, avg=118.67, stdev=25.69 00:25:57.984 clat percentiles (msec): 00:25:57.984 | 1.00th=[ 64], 5.00th=[ 74], 10.00th=[ 81], 20.00th=[ 90], 00:25:57.984 | 30.00th=[ 101], 40.00th=[ 118], 50.00th=[ 126], 60.00th=[ 130], 00:25:57.984 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 150], 00:25:57.984 | 99.00th=[ 163], 99.50th=[ 163], 99.90th=[ 169], 99.95th=[ 176], 00:25:57.984 | 99.99th=[ 199] 00:25:57.984 bw ( KiB/s): min=99014, max=198656, per=9.69%, avg=137021.90, stdev=29518.70, samples=20 00:25:57.984 iops : min= 386, max= 776, avg=535.20, stdev=115.36, samples=20 00:25:57.984 lat (msec) : 50=0.02%, 100=29.34%, 250=70.64% 00:25:57.984 cpu : usr=0.20%, sys=2.04%, ctx=986, majf=0, minf=4097 00:25:57.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.984 issued rwts: total=5416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.984 job2: (groupid=0, jobs=1): err= 0: pid=94509: Fri Jul 12 00:45:00 2024 00:25:57.984 read: IOPS=326, BW=81.5MiB/s (85.5MB/s)(829MiB/10167msec) 00:25:57.984 slat (usec): min=10, max=134536, avg=2974.00, stdev=12372.23 00:25:57.984 clat (msec): min=37, max=345, avg=192.92, stdev=53.46 00:25:57.984 lat (msec): min=38, max=394, avg=195.90, stdev=55.44 00:25:57.984 clat percentiles (msec): 00:25:57.984 | 1.00th=[ 69], 5.00th=[ 92], 10.00th=[ 104], 20.00th=[ 138], 00:25:57.984 | 30.00th=[ 194], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 213], 00:25:57.984 | 70.00th=[ 220], 80.00th=[ 228], 90.00th=[ 247], 95.00th=[ 257], 00:25:57.984 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:25:57.984 | 99.99th=[ 347] 00:25:57.984 bw ( KiB/s): min=64512, max=164023, per=5.88%, avg=83175.75, stdev=25176.09, samples=20 00:25:57.984 iops : min= 252, max= 640, avg=324.80, stdev=98.23, samples=20 00:25:57.984 lat (msec) : 50=0.09%, 100=8.57%, 250=84.16%, 500=7.18% 00:25:57.984 cpu : usr=0.16%, sys=1.20%, ctx=562, majf=0, minf=4097 00:25:57.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.984 issued rwts: total=3315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.984 job3: (groupid=0, jobs=1): err= 0: pid=94510: Fri Jul 12 00:45:00 2024 00:25:57.984 read: 
IOPS=701, BW=175MiB/s (184MB/s)(1763MiB/10052msec) 00:25:57.984 slat (usec): min=19, max=137499, avg=1387.18, stdev=5440.48 00:25:57.984 clat (msec): min=7, max=296, avg=89.69, stdev=27.43 00:25:57.984 lat (msec): min=7, max=321, avg=91.08, stdev=28.06 00:25:57.984 clat percentiles (msec): 00:25:57.984 | 1.00th=[ 26], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 77], 00:25:57.984 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 91], 00:25:57.984 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 118], 00:25:57.984 | 99.00th=[ 215], 99.50th=[ 226], 99.90th=[ 232], 99.95th=[ 232], 00:25:57.984 | 99.99th=[ 296] 00:25:57.984 bw ( KiB/s): min=102400, max=208801, per=12.64%, avg=178848.00, stdev=24816.93, samples=20 00:25:57.984 iops : min= 400, max= 815, avg=698.45, stdev=96.97, samples=20 00:25:57.984 lat (msec) : 10=0.27%, 20=0.35%, 50=2.60%, 100=79.12%, 250=17.65% 00:25:57.984 lat (msec) : 500=0.01% 00:25:57.984 cpu : usr=0.20%, sys=2.24%, ctx=1142, majf=0, minf=4097 00:25:57.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:57.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.984 issued rwts: total=7050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.984 job4: (groupid=0, jobs=1): err= 0: pid=94511: Fri Jul 12 00:45:00 2024 00:25:57.984 read: IOPS=288, BW=72.0MiB/s (75.5MB/s)(732MiB/10166msec) 00:25:57.984 slat (usec): min=18, max=164935, avg=3410.32, stdev=12198.74 00:25:57.984 clat (msec): min=151, max=419, avg=218.36, stdev=27.80 00:25:57.984 lat (msec): min=154, max=419, avg=221.77, stdev=30.15 00:25:57.984 clat percentiles (msec): 00:25:57.984 | 1.00th=[ 165], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 201], 00:25:57.984 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 215], 60.00th=[ 220], 00:25:57.984 | 70.00th=[ 226], 80.00th=[ 234], 90.00th=[ 253], 95.00th=[ 264], 00:25:57.984 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 422], 99.95th=[ 422], 00:25:57.984 | 99.99th=[ 422] 00:25:57.985 bw ( KiB/s): min=58368, max=95552, per=5.18%, avg=73327.35, stdev=8907.99, samples=20 00:25:57.985 iops : min= 228, max= 373, avg=286.30, stdev=34.78, samples=20 00:25:57.985 lat (msec) : 250=87.88%, 500=12.12% 00:25:57.985 cpu : usr=0.15%, sys=1.22%, ctx=599, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=2928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job5: (groupid=0, jobs=1): err= 0: pid=94512: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=1175, BW=294MiB/s (308MB/s)(2948MiB/10029msec) 00:25:57.985 slat (usec): min=17, max=80226, avg=844.64, stdev=3608.01 00:25:57.985 clat (msec): min=14, max=166, avg=53.51, stdev=21.39 00:25:57.985 lat (msec): min=15, max=218, avg=54.35, stdev=21.75 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 41], 00:25:57.985 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 53], 00:25:57.985 | 70.00th=[ 56], 80.00th=[ 59], 90.00th=[ 68], 95.00th=[ 101], 00:25:57.985 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:25:57.985 | 
99.99th=[ 167] 00:25:57.985 bw ( KiB/s): min=116224, max=355640, per=21.22%, avg=300250.15, stdev=75485.59, samples=20 00:25:57.985 iops : min= 454, max= 1389, avg=1172.80, stdev=294.84, samples=20 00:25:57.985 lat (msec) : 20=0.11%, 50=53.18%, 100=41.70%, 250=5.01% 00:25:57.985 cpu : usr=0.46%, sys=3.85%, ctx=2285, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=11793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job6: (groupid=0, jobs=1): err= 0: pid=94513: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=288, BW=72.2MiB/s (75.8MB/s)(735MiB/10174msec) 00:25:57.985 slat (usec): min=20, max=145109, avg=3400.67, stdev=13999.34 00:25:57.985 clat (msec): min=25, max=423, avg=217.63, stdev=36.60 00:25:57.985 lat (msec): min=25, max=423, avg=221.03, stdev=39.30 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 97], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 199], 00:25:57.985 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 220], 00:25:57.985 | 70.00th=[ 226], 80.00th=[ 236], 90.00th=[ 255], 95.00th=[ 271], 00:25:57.985 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 422], 00:25:57.985 | 99.99th=[ 422] 00:25:57.985 bw ( KiB/s): min=58368, max=91648, per=5.20%, avg=73618.15, stdev=10301.28, samples=20 00:25:57.985 iops : min= 228, max= 358, avg=287.50, stdev=40.29, samples=20 00:25:57.985 lat (msec) : 50=0.41%, 100=0.68%, 250=85.41%, 500=13.50% 00:25:57.985 cpu : usr=0.11%, sys=1.12%, ctx=527, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=2940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job7: (groupid=0, jobs=1): err= 0: pid=94514: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=679, BW=170MiB/s (178MB/s)(1708MiB/10051msec) 00:25:57.985 slat (usec): min=18, max=95424, avg=1451.65, stdev=5333.47 00:25:57.985 clat (msec): min=32, max=216, avg=92.47, stdev=18.37 00:25:57.985 lat (msec): min=33, max=232, avg=93.92, stdev=19.00 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 59], 5.00th=[ 69], 10.00th=[ 74], 20.00th=[ 81], 00:25:57.985 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 93], 00:25:57.985 | 70.00th=[ 97], 80.00th=[ 102], 90.00th=[ 111], 95.00th=[ 129], 00:25:57.985 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 194], 99.95th=[ 194], 00:25:57.985 | 99.99th=[ 218] 00:25:57.985 bw ( KiB/s): min=102092, max=201619, per=12.25%, avg=173301.15, stdev=23075.47, samples=20 00:25:57.985 iops : min= 398, max= 787, avg=676.75, stdev=90.31, samples=20 00:25:57.985 lat (msec) : 50=0.20%, 100=77.93%, 250=21.86% 00:25:57.985 cpu : usr=0.27%, sys=2.15%, ctx=1062, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=6833,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job8: (groupid=0, jobs=1): err= 0: pid=94515: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=282, BW=70.7MiB/s (74.1MB/s)(719MiB/10173msec) 00:25:57.985 slat (usec): min=17, max=116192, avg=3478.08, stdev=10935.63 00:25:57.985 clat (msec): min=26, max=419, avg=222.44, stdev=34.90 00:25:57.985 lat (msec): min=27, max=419, avg=225.92, stdev=36.64 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 63], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 205], 00:25:57.985 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 224], 60.00th=[ 228], 00:25:57.985 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 259], 95.00th=[ 271], 00:25:57.985 | 99.00th=[ 368], 99.50th=[ 388], 99.90th=[ 418], 99.95th=[ 422], 00:25:57.985 | 99.99th=[ 422] 00:25:57.985 bw ( KiB/s): min=59904, max=88064, per=5.09%, avg=72011.95, stdev=6573.72, samples=20 00:25:57.985 iops : min= 234, max= 344, avg=281.20, stdev=25.71, samples=20 00:25:57.985 lat (msec) : 50=0.14%, 100=0.87%, 250=85.30%, 500=13.69% 00:25:57.985 cpu : usr=0.11%, sys=0.97%, ctx=632, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=2877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job9: (groupid=0, jobs=1): err= 0: pid=94516: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=445, BW=111MiB/s (117MB/s)(1133MiB/10175msec) 00:25:57.985 slat (usec): min=15, max=107583, avg=2183.70, stdev=7837.02 00:25:57.985 clat (msec): min=28, max=477, avg=141.20, stdev=49.78 00:25:57.985 lat (msec): min=29, max=477, avg=143.39, stdev=50.89 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 86], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 115], 00:25:57.985 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 130], 00:25:57.985 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 249], 95.00th=[ 264], 00:25:57.985 | 99.00th=[ 288], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 384], 00:25:57.985 | 99.99th=[ 477] 00:25:57.985 bw ( KiB/s): min=56832, max=143872, per=8.08%, avg=114351.20, stdev=28600.87, samples=20 00:25:57.985 iops : min= 222, max= 562, avg=446.60, stdev=111.74, samples=20 00:25:57.985 lat (msec) : 50=0.18%, 100=2.80%, 250=87.71%, 500=9.31% 00:25:57.985 cpu : usr=0.25%, sys=1.80%, ctx=847, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=4531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 job10: (groupid=0, jobs=1): err= 0: pid=94517: Fri Jul 12 00:45:00 2024 00:25:57.985 read: IOPS=548, BW=137MiB/s (144MB/s)(1379MiB/10059msec) 00:25:57.985 slat (usec): min=20, max=84109, avg=1809.57, stdev=6203.63 00:25:57.985 clat (msec): min=21, max=187, avg=114.74, stdev=23.94 00:25:57.985 lat (msec): min=23, max=202, avg=116.55, stdev=24.76 00:25:57.985 clat percentiles (msec): 00:25:57.985 | 1.00th=[ 62], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 89], 00:25:57.985 | 30.00th=[ 104], 40.00th=[ 115], 50.00th=[ 121], 60.00th=[ 
126], 00:25:57.985 | 70.00th=[ 130], 80.00th=[ 136], 90.00th=[ 142], 95.00th=[ 146], 00:25:57.985 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 178], 00:25:57.985 | 99.99th=[ 188] 00:25:57.985 bw ( KiB/s): min=111616, max=197632, per=9.86%, avg=139494.95, stdev=28066.91, samples=20 00:25:57.985 iops : min= 436, max= 772, avg=544.85, stdev=109.66, samples=20 00:25:57.985 lat (msec) : 50=0.05%, 100=28.98%, 250=70.96% 00:25:57.985 cpu : usr=0.26%, sys=2.09%, ctx=961, majf=0, minf=4097 00:25:57.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:57.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:57.985 issued rwts: total=5514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:57.985 00:25:57.985 Run status group 0 (all jobs): 00:25:57.985 READ: bw=1382MiB/s (1449MB/s), 70.7MiB/s-294MiB/s (74.1MB/s-308MB/s), io=13.7GiB (14.7GB), run=10029-10175msec 00:25:57.985 00:25:57.985 Disk stats (read/write): 00:25:57.985 nvme0n1: ios=5937/0, merge=0/0, ticks=1239478/0, in_queue=1239478, util=97.78% 00:25:57.985 nvme10n1: ios=10704/0, merge=0/0, ticks=1243474/0, in_queue=1243474, util=97.73% 00:25:57.985 nvme1n1: ios=6502/0, merge=0/0, ticks=1234454/0, in_queue=1234454, util=97.94% 00:25:57.985 nvme2n1: ios=13977/0, merge=0/0, ticks=1239881/0, in_queue=1239881, util=98.13% 00:25:57.985 nvme3n1: ios=5748/0, merge=0/0, ticks=1239152/0, in_queue=1239152, util=98.07% 00:25:57.985 nvme4n1: ios=23521/0, merge=0/0, ticks=1235534/0, in_queue=1235534, util=98.55% 00:25:57.985 nvme5n1: ios=5752/0, merge=0/0, ticks=1228550/0, in_queue=1228550, util=98.49% 00:25:57.985 nvme6n1: ios=13538/0, merge=0/0, ticks=1240973/0, in_queue=1240973, util=98.53% 00:25:57.985 nvme7n1: ios=5626/0, merge=0/0, ticks=1232989/0, in_queue=1232989, util=98.88% 00:25:57.985 nvme8n1: ios=8939/0, merge=0/0, ticks=1232690/0, in_queue=1232690, util=98.91% 00:25:57.985 nvme9n1: ios=10923/0, merge=0/0, ticks=1243485/0, in_queue=1243485, util=99.05% 00:25:57.985 00:45:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:57.985 [global] 00:25:57.985 thread=1 00:25:57.985 invalidate=1 00:25:57.985 rw=randwrite 00:25:57.985 time_based=1 00:25:57.985 runtime=10 00:25:57.985 ioengine=libaio 00:25:57.985 direct=1 00:25:57.985 bs=262144 00:25:57.985 iodepth=64 00:25:57.985 norandommap=1 00:25:57.985 numjobs=1 00:25:57.985 00:25:57.985 [job0] 00:25:57.985 filename=/dev/nvme0n1 00:25:57.985 [job1] 00:25:57.985 filename=/dev/nvme10n1 00:25:57.985 [job2] 00:25:57.985 filename=/dev/nvme1n1 00:25:57.985 [job3] 00:25:57.985 filename=/dev/nvme2n1 00:25:57.985 [job4] 00:25:57.985 filename=/dev/nvme3n1 00:25:57.986 [job5] 00:25:57.986 filename=/dev/nvme4n1 00:25:57.986 [job6] 00:25:57.986 filename=/dev/nvme5n1 00:25:57.986 [job7] 00:25:57.986 filename=/dev/nvme6n1 00:25:57.986 [job8] 00:25:57.986 filename=/dev/nvme7n1 00:25:57.986 [job9] 00:25:57.986 filename=/dev/nvme8n1 00:25:57.986 [job10] 00:25:57.986 filename=/dev/nvme9n1 00:25:57.986 Could not set queue depth (nvme0n1) 00:25:57.986 Could not set queue depth (nvme10n1) 00:25:57.986 Could not set queue depth (nvme1n1) 00:25:57.986 Could not set queue depth (nvme2n1) 00:25:57.986 Could not set queue depth (nvme3n1) 00:25:57.986 Could not set queue depth (nvme4n1) 
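[editor's note] Both job files printed in this log follow one template that scripts/fio-wrapper builds from its flags (-t randwrite -> rw=randwrite, -r 10 -> runtime=10, -i 262144 -> bs=262144, -d 64 -> iodepth=64), with one [jobN] stanza per connected namespace; the lexicographic glob order is why job1 lands on nvme10n1 ahead of nvme1n1. The READ summary above is consistent with the per-job lines: the eleven io totals sum to ~13.7 GiB, and dividing that by the ~10.17 s wall time gives the reported 1382 MiB/s aggregate. A hypothetical sketch of the generation step (the wrapper itself is not shown in this log, so the function name and argument order are assumptions):

    # hypothetical reconstruction -- the real scripts/fio-wrapper is not shown here
    gen_fio_job() {
        local rw=$1 bs=$2 iodepth=$3 runtime=$4 n=0 dev
        printf '[global]\nthread=1\ninvalidate=1\nrw=%s\ntime_based=1\nruntime=%s\n' "$rw" "$runtime"
        printf 'ioengine=libaio\ndirect=1\nbs=%s\niodepth=%s\nnorandommap=1\nnumjobs=1\n' "$bs" "$iodepth"
        for dev in /dev/nvme*n1; do   # glob order: nvme0n1, nvme10n1, nvme1n1, ...
            printf '[job%d]\nfilename=%s\n' "$((n++))" "$dev"
        done
    }
    # usage matching the traced call: gen_fio_job randwrite 262144 64 10 > mc.fio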
00:25:57.986 Could not set queue depth (nvme5n1) 00:25:57.986 Could not set queue depth (nvme6n1) 00:25:57.986 Could not set queue depth (nvme7n1) 00:25:57.986 Could not set queue depth (nvme8n1) 00:25:57.986 Could not set queue depth (nvme9n1) 00:25:57.986 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:57.986 fio-3.35 00:25:57.986 Starting 11 threads 00:26:07.967 00:26:07.967 job0: (groupid=0, jobs=1): err= 0: pid=94712: Fri Jul 12 00:45:11 2024 00:26:07.967 write: IOPS=294, BW=73.5MiB/s (77.1MB/s)(753MiB/10236msec); 0 zone resets 00:26:07.967 slat (usec): min=24, max=23250, avg=3253.55, stdev=5757.61 00:26:07.967 clat (msec): min=22, max=502, avg=214.25, stdev=36.60 00:26:07.967 lat (msec): min=22, max=502, avg=217.50, stdev=36.70 00:26:07.967 clat percentiles (msec): 00:26:07.967 | 1.00th=[ 109], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 201], 00:26:07.967 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 213], 60.00th=[ 218], 00:26:07.967 | 70.00th=[ 224], 80.00th=[ 230], 90.00th=[ 234], 95.00th=[ 259], 00:26:07.967 | 99.00th=[ 368], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 502], 00:26:07.967 | 99.99th=[ 502] 00:26:07.967 bw ( KiB/s): min=59904, max=89088, per=6.82%, avg=75443.20, stdev=6472.67, samples=20 00:26:07.967 iops : min= 234, max= 348, avg=294.70, stdev=25.28, samples=20 00:26:07.967 lat (msec) : 50=0.40%, 100=0.53%, 250=92.72%, 500=6.28%, 750=0.07% 00:26:07.967 cpu : usr=0.64%, sys=1.02%, ctx=3677, majf=0, minf=1 00:26:07.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:07.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.967 issued rwts: total=0,3010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.967 job1: (groupid=0, jobs=1): err= 0: pid=94713: Fri Jul 12 00:45:11 2024 00:26:07.967 write: IOPS=617, BW=154MiB/s (162MB/s)(1557MiB/10090msec); 0 zone resets 00:26:07.967 slat (usec): min=19, max=51819, avg=1600.33, stdev=2773.08 00:26:07.967 clat (msec): min=59, max=185, avg=102.05, stdev= 7.18 00:26:07.967 lat (msec): min=59, max=185, 
avg=103.65, stdev= 6.75 00:26:07.967 clat percentiles (msec): 00:26:07.967 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:26:07.967 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:26:07.967 | 70.00th=[ 105], 80.00th=[ 107], 90.00th=[ 110], 95.00th=[ 111], 00:26:07.967 | 99.00th=[ 118], 99.50th=[ 138], 99.90th=[ 174], 99.95th=[ 180], 00:26:07.967 | 99.99th=[ 186] 00:26:07.967 bw ( KiB/s): min=143360, max=167936, per=14.26%, avg=157772.80, stdev=6504.56, samples=20 00:26:07.967 iops : min= 560, max= 656, avg=616.30, stdev=25.41, samples=20 00:26:07.967 lat (msec) : 100=41.34%, 250=58.66% 00:26:07.967 cpu : usr=1.21%, sys=1.92%, ctx=7470, majf=0, minf=1 00:26:07.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:07.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.967 issued rwts: total=0,6226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.967 job2: (groupid=0, jobs=1): err= 0: pid=94725: Fri Jul 12 00:45:11 2024 00:26:07.967 write: IOPS=632, BW=158MiB/s (166MB/s)(1601MiB/10128msec); 0 zone resets 00:26:07.967 slat (usec): min=16, max=31880, avg=1548.82, stdev=3111.55 00:26:07.967 clat (msec): min=6, max=263, avg=99.65, stdev=52.44 00:26:07.967 lat (msec): min=8, max=263, avg=101.20, stdev=53.17 00:26:07.967 clat percentiles (msec): 00:26:07.967 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 59], 20.00th=[ 62], 00:26:07.967 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:26:07.967 | 70.00th=[ 122], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 194], 00:26:07.967 | 99.00th=[ 199], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 255], 00:26:07.967 | 99.99th=[ 264] 00:26:07.967 bw ( KiB/s): min=83968, max=269312, per=14.67%, avg=162282.95, stdev=79602.05, samples=20 00:26:07.967 iops : min= 328, max= 1052, avg=633.90, stdev=310.96, samples=20 00:26:07.967 lat (msec) : 10=0.03%, 20=0.06%, 50=0.25%, 100=64.14%, 250=35.43% 00:26:07.967 lat (msec) : 500=0.09% 00:26:07.967 cpu : usr=1.37%, sys=1.92%, ctx=7686, majf=0, minf=1 00:26:07.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:07.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.967 issued rwts: total=0,6402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.967 job3: (groupid=0, jobs=1): err= 0: pid=94726: Fri Jul 12 00:45:11 2024 00:26:07.967 write: IOPS=314, BW=78.6MiB/s (82.4MB/s)(805MiB/10240msec); 0 zone resets 00:26:07.967 slat (usec): min=22, max=28027, avg=3073.21, stdev=5535.40 00:26:07.967 clat (msec): min=8, max=497, avg=200.41, stdev=50.39 00:26:07.967 lat (msec): min=8, max=497, avg=203.48, stdev=50.83 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 63], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 182], 00:26:07.968 | 30.00th=[ 194], 40.00th=[ 201], 50.00th=[ 207], 60.00th=[ 215], 00:26:07.968 | 70.00th=[ 226], 80.00th=[ 232], 90.00th=[ 241], 95.00th=[ 259], 00:26:07.968 | 99.00th=[ 363], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 498], 00:26:07.968 | 99.99th=[ 498] 00:26:07.968 bw ( KiB/s): min=59904, max=136704, per=7.30%, avg=80768.00, stdev=18853.75, samples=20 00:26:07.968 iops : min= 234, max= 534, avg=315.50, stdev=73.65, samples=20 00:26:07.968 
lat (msec) : 10=0.19%, 20=0.12%, 50=0.50%, 100=0.56%, 250=92.67% 00:26:07.968 lat (msec) : 500=5.97% 00:26:07.968 cpu : usr=0.66%, sys=1.07%, ctx=3550, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job4: (groupid=0, jobs=1): err= 0: pid=94727: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=301, BW=75.3MiB/s (78.9MB/s)(771MiB/10243msec); 0 zone resets 00:26:07.968 slat (usec): min=23, max=30422, avg=3238.45, stdev=5711.41 00:26:07.968 clat (msec): min=23, max=503, avg=209.20, stdev=41.17 00:26:07.968 lat (msec): min=23, max=503, avg=212.44, stdev=41.30 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 107], 5.00th=[ 144], 10.00th=[ 155], 20.00th=[ 192], 00:26:07.968 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 218], 00:26:07.968 | 70.00th=[ 226], 80.00th=[ 230], 90.00th=[ 239], 95.00th=[ 264], 00:26:07.968 | 99.00th=[ 368], 99.50th=[ 435], 99.90th=[ 489], 99.95th=[ 506], 00:26:07.968 | 99.99th=[ 506] 00:26:07.968 bw ( KiB/s): min=59904, max=108544, per=6.99%, avg=77315.45, stdev=10838.80, samples=20 00:26:07.968 iops : min= 234, max= 424, avg=301.95, stdev=42.36, samples=20 00:26:07.968 lat (msec) : 50=0.36%, 100=0.55%, 250=92.70%, 500=6.32%, 750=0.06% 00:26:07.968 cpu : usr=0.82%, sys=0.93%, ctx=3561, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job5: (groupid=0, jobs=1): err= 0: pid=94728: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=331, BW=82.8MiB/s (86.9MB/s)(839MiB/10127msec); 0 zone resets 00:26:07.968 slat (usec): min=19, max=35589, avg=2974.54, stdev=5190.41 00:26:07.968 clat (msec): min=37, max=268, avg=190.08, stdev=23.94 00:26:07.968 lat (msec): min=37, max=268, avg=193.05, stdev=23.79 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 117], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 182], 00:26:07.968 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:26:07.968 | 70.00th=[ 203], 80.00th=[ 207], 90.00th=[ 213], 95.00th=[ 218], 00:26:07.968 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 259], 99.95th=[ 271], 00:26:07.968 | 99.99th=[ 271] 00:26:07.968 bw ( KiB/s): min=75776, max=111104, per=7.62%, avg=84292.00, stdev=8773.95, samples=20 00:26:07.968 iops : min= 296, max= 434, avg=329.25, stdev=34.27, samples=20 00:26:07.968 lat (msec) : 50=0.24%, 100=0.60%, 250=98.87%, 500=0.30% 00:26:07.968 cpu : usr=0.72%, sys=0.95%, ctx=4562, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job6: 
(groupid=0, jobs=1): err= 0: pid=94729: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=332, BW=83.0MiB/s (87.1MB/s)(841MiB/10131msec); 0 zone resets 00:26:07.968 slat (usec): min=22, max=32294, avg=2966.74, stdev=5163.25 00:26:07.968 clat (msec): min=22, max=270, avg=189.64, stdev=24.42 00:26:07.968 lat (msec): min=22, max=270, avg=192.61, stdev=24.29 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 101], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 182], 00:26:07.968 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:26:07.968 | 70.00th=[ 203], 80.00th=[ 207], 90.00th=[ 213], 95.00th=[ 215], 00:26:07.968 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 264], 99.95th=[ 271], 00:26:07.968 | 99.99th=[ 271] 00:26:07.968 bw ( KiB/s): min=75776, max=111104, per=7.64%, avg=84523.20, stdev=8675.88, samples=20 00:26:07.968 iops : min= 296, max= 434, avg=330.15, stdev=33.90, samples=20 00:26:07.968 lat (msec) : 50=0.36%, 100=0.59%, 250=98.75%, 500=0.30% 00:26:07.968 cpu : usr=0.82%, sys=1.02%, ctx=3766, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job7: (groupid=0, jobs=1): err= 0: pid=94731: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=316, BW=79.2MiB/s (83.1MB/s)(811MiB/10236msec); 0 zone resets 00:26:07.968 slat (usec): min=24, max=18935, avg=2974.22, stdev=5320.04 00:26:07.968 clat (msec): min=13, max=502, avg=198.89, stdev=36.26 00:26:07.968 lat (msec): min=13, max=502, avg=201.87, stdev=36.35 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 87], 5.00th=[ 171], 10.00th=[ 178], 20.00th=[ 186], 00:26:07.968 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 201], 00:26:07.968 | 70.00th=[ 205], 80.00th=[ 209], 90.00th=[ 215], 95.00th=[ 251], 00:26:07.968 | 99.00th=[ 368], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 502], 00:26:07.968 | 99.99th=[ 502] 00:26:07.968 bw ( KiB/s): min=59904, max=93883, per=7.36%, avg=81417.35, stdev=6944.77, samples=20 00:26:07.968 iops : min= 234, max= 366, avg=318.00, stdev=27.06, samples=20 00:26:07.968 lat (msec) : 20=0.12%, 50=0.31%, 100=0.77%, 250=93.68%, 500=5.06% 00:26:07.968 lat (msec) : 750=0.06% 00:26:07.968 cpu : usr=0.69%, sys=0.98%, ctx=3791, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job8: (groupid=0, jobs=1): err= 0: pid=94732: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=618, BW=155MiB/s (162MB/s)(1561MiB/10089msec); 0 zone resets 00:26:07.968 slat (usec): min=17, max=38547, avg=1595.83, stdev=2740.39 00:26:07.968 clat (msec): min=8, max=185, avg=101.78, stdev= 9.77 00:26:07.968 lat (msec): min=8, max=185, avg=103.37, stdev= 9.53 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 71], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:26:07.968 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:26:07.968 | 
70.00th=[ 105], 80.00th=[ 107], 90.00th=[ 110], 95.00th=[ 111], 00:26:07.968 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 174], 99.95th=[ 180], 00:26:07.968 | 99.99th=[ 186] 00:26:07.968 bw ( KiB/s): min=149504, max=167936, per=14.30%, avg=158218.60, stdev=5564.33, samples=20 00:26:07.968 iops : min= 584, max= 656, avg=618.00, stdev=21.80, samples=20 00:26:07.968 lat (msec) : 10=0.06%, 20=0.13%, 50=0.58%, 100=40.04%, 250=59.19% 00:26:07.968 cpu : usr=1.36%, sys=1.64%, ctx=7249, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,6244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job9: (groupid=0, jobs=1): err= 0: pid=94733: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=299, BW=75.0MiB/s (78.6MB/s)(768MiB/10246msec); 0 zone resets 00:26:07.968 slat (usec): min=24, max=37958, avg=3252.07, stdev=5746.93 00:26:07.968 clat (msec): min=22, max=505, avg=210.08, stdev=39.83 00:26:07.968 lat (msec): min=22, max=505, avg=213.33, stdev=39.91 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 105], 5.00th=[ 146], 10.00th=[ 157], 20.00th=[ 197], 00:26:07.968 | 30.00th=[ 203], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 215], 00:26:07.968 | 70.00th=[ 224], 80.00th=[ 228], 90.00th=[ 232], 95.00th=[ 257], 00:26:07.968 | 99.00th=[ 372], 99.50th=[ 439], 99.90th=[ 489], 99.95th=[ 506], 00:26:07.968 | 99.99th=[ 506] 00:26:07.968 bw ( KiB/s): min=59904, max=108544, per=6.96%, avg=77015.20, stdev=9997.22, samples=20 00:26:07.968 iops : min= 234, max= 424, avg=300.80, stdev=39.06, samples=20 00:26:07.968 lat (msec) : 50=0.39%, 100=0.52%, 250=93.23%, 500=5.79%, 750=0.07% 00:26:07.968 cpu : usr=0.61%, sys=0.96%, ctx=2702, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:07.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 job10: (groupid=0, jobs=1): err= 0: pid=94734: Fri Jul 12 00:45:11 2024 00:26:07.968 write: IOPS=298, BW=74.6MiB/s (78.2MB/s)(762MiB/10219msec); 0 zone resets 00:26:07.968 slat (usec): min=20, max=47279, avg=3276.23, stdev=5824.39 00:26:07.968 clat (msec): min=41, max=488, avg=211.15, stdev=37.53 00:26:07.968 lat (msec): min=41, max=488, avg=214.43, stdev=37.57 00:26:07.968 clat percentiles (msec): 00:26:07.968 | 1.00th=[ 125], 5.00th=[ 153], 10.00th=[ 167], 20.00th=[ 194], 00:26:07.968 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 215], 00:26:07.968 | 70.00th=[ 228], 80.00th=[ 232], 90.00th=[ 243], 95.00th=[ 259], 00:26:07.968 | 99.00th=[ 355], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 489], 00:26:07.968 | 99.99th=[ 489] 00:26:07.968 bw ( KiB/s): min=57856, max=106496, per=6.91%, avg=76416.00, stdev=9693.01, samples=20 00:26:07.968 iops : min= 226, max= 416, avg=298.50, stdev=37.86, samples=20 00:26:07.968 lat (msec) : 50=0.13%, 100=0.52%, 250=93.47%, 500=5.87% 00:26:07.968 cpu : usr=0.69%, sys=0.92%, ctx=2955, majf=0, minf=1 00:26:07.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:07.968 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.968 issued rwts: total=0,3048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.968 00:26:07.968 Run status group 0 (all jobs): 00:26:07.968 WRITE: bw=1080MiB/s (1133MB/s), 73.5MiB/s-158MiB/s (77.1MB/s-166MB/s), io=10.8GiB (11.6GB), run=10089-10246msec 00:26:07.968 00:26:07.968 Disk stats (read/write): 00:26:07.969 nvme0n1: ios=49/5991, merge=0/0, ticks=53/1232809, in_queue=1232862, util=97.54% 00:26:07.969 nvme10n1: ios=49/12222, merge=0/0, ticks=66/1206636, in_queue=1206702, util=97.42% 00:26:07.969 nvme1n1: ios=28/12615, merge=0/0, ticks=38/1206513, in_queue=1206551, util=97.59% 00:26:07.969 nvme2n1: ios=0/6405, merge=0/0, ticks=0/1231965, in_queue=1231965, util=97.76% 00:26:07.969 nvme3n1: ios=15/6145, merge=0/0, ticks=15/1233059, in_queue=1233074, util=97.98% 00:26:07.969 nvme4n1: ios=4/6528, merge=0/0, ticks=171/1206576, in_queue=1206747, util=98.07% 00:26:07.969 nvme5n1: ios=0/6551, merge=0/0, ticks=0/1207716, in_queue=1207716, util=98.16% 00:26:07.969 nvme6n1: ios=0/6459, merge=0/0, ticks=0/1233848, in_queue=1233848, util=98.29% 00:26:07.969 nvme7n1: ios=0/12269, merge=0/0, ticks=0/1207922, in_queue=1207922, util=98.48% 00:26:07.969 nvme8n1: ios=0/6119, merge=0/0, ticks=0/1233050, in_queue=1233050, util=98.92% 00:26:07.969 nvme9n1: ios=0/6056, merge=0/0, ticks=0/1228707, in_queue=1228707, util=98.64% 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 
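[editor's note] From here the script tears everything back down: multiconnection.sh lines 37-40 disconnect each initiator, wait for the serial to disappear from lsblk, then delete the subsystem over RPC. Roughly, as reconstructed from the trace (the grep -q -w polling mirrors the autotest_common.sh lines 1219-1231 shown in the xtrace; the retry bound is an assumption carried over from waitforserial):

    # approximation of the traced disconnect wait
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # assumption: same retry bound as waitforserial
            sleep 2
        done
        # confirm with the long listing, as the trace does at line 1227
        ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK$i"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done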
00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:07.969 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.969 00:45:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:08.228 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.228 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:08.229 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.229 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:08.487 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:08.487 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:08.487 
00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.487 rmmod nvme_tcp 00:26:08.487 rmmod nvme_fabrics 00:26:08.487 rmmod nvme_keyring 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 94032 ']' 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 94032 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 94032 ']' 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 94032 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:08.487 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94032 00:26:08.763 killing process with pid 94032 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94032' 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 94032 00:26:08.763 00:45:13 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@972 -- # wait 94032 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:12.077 00:26:12.077 real 0m53.588s 00:26:12.077 user 3m3.151s 00:26:12.077 sys 0m21.837s 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:12.077 00:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.077 ************************************ 00:26:12.077 END TEST nvmf_multiconnection 00:26:12.077 ************************************ 00:26:12.077 00:45:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:12.077 00:45:16 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:12.077 00:45:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:12.077 00:45:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.077 00:45:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.077 ************************************ 00:26:12.077 START TEST nvmf_initiator_timeout 00:26:12.077 ************************************ 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:12.077 * Looking for test storage... 
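[editor's note] Before the initiator_timeout setup output continues below, note what the multiconnection teardown traced above amounted to: sync, a retried best-effort unload of the host-side NVMe/TCP modules (the rmmod lines), killing and waiting on the nvmf_tgt reactor (pid 94032 in this run), dropping the test network namespace, and flushing the init interface. A loose paraphrase of the traced nvmftestfini path (tcp transport, non-iso environment; names taken from the xtrace, control flow simplified):

    nvmftestfini() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # retried per nvmf/common.sh@121-122
        done
        modprobe -v -r nvme-fabrics
        set -e
        killprocess "$nvmfpid"    # kill -0 / kill / wait on nvmf_tgt (94032 here)
        _remove_spdk_ns           # tear down the nvmf_tgt_ns_spdk namespace
        ip -4 addr flush nvmf_init_if
    }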
00:26:12.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.077 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:12.078 00:45:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:12.078 Cannot find device "nvmf_tgt_br" 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.078 Cannot find device "nvmf_tgt_br2" 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:12.078 Cannot find device "nvmf_tgt_br" 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:12.078 Cannot find device "nvmf_tgt_br2" 00:26:12.078 00:45:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:12.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:12.078 00:45:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:12.078 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:12.078 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
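The nvmf_veth_init trace above boils down to a small, reproducible topology: one network namespace for the target, three veth pairs, and a bridge tying the host-side peers together. A minimal standalone sketch of the same commands, assuming iproute2 and that none of these interface names already exist:

ip netns add nvmf_tgt_ns_spdk
# veth pairs: one initiator-side, two target-side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends of the pairs into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address plan: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

The iptables rules and three pings that follow in the trace are the sanity check for this layout: 10.0.0.2 and 10.0.0.3 must answer from the host, and 10.0.0.1 from inside the namespace.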
00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:12.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:26:12.337 00:26:12.337 --- 10.0.0.2 ping statistics --- 00:26:12.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.337 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:12.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:12.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:26:12.337 00:26:12.337 --- 10.0.0.3 ping statistics --- 00:26:12.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.337 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:12.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:26:12.337 00:26:12.337 --- 10.0.0.1 ping statistics --- 00:26:12.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.337 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=95131 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 95131 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 95131 ']' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.337 00:45:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.596 [2024-07-12 00:45:17.323778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:12.596 [2024-07-12 00:45:17.323960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.596 [2024-07-12 00:45:17.506363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.854 [2024-07-12 00:45:17.780752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.854 [2024-07-12 00:45:17.781123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.854 [2024-07-12 00:45:17.781234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.854 [2024-07-12 00:45:17.781339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.854 [2024-07-12 00:45:17.781454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
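With the topology up, nvmfappstart launches the target binary inside the namespace and waitforlisten polls the RPC socket until the app answers. A hedged sketch of that sequence; the flags are the ones in the trace, but the polling loop and its bound are illustrative rather than the exact helper body:

# launch nvmf_tgt in the target namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until /var/tmp/spdk.sock accepts RPCs (rpc_get_methods is a standard SPDK RPC;
# this retry loop is an assumption standing in for waitforlisten)
for _ in $(seq 1 100); do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
    rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done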
00:26:12.854 [2024-07-12 00:45:17.781768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.854 [2024-07-12 00:45:17.781886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.854 [2024-07-12 00:45:17.782627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.854 [2024-07-12 00:45:17.782636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.421 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 Malloc0 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 Delay0 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 [2024-07-12 00:45:18.457126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:13.680 [2024-07-12 00:45:18.489341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.680 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:13.939 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:13.939 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.939 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.939 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:13.939 00:45:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=95213 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:15.888 00:45:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:15.888 [global] 00:26:15.888 thread=1 00:26:15.888 invalidate=1 00:26:15.888 rw=write 00:26:15.888 time_based=1 00:26:15.888 runtime=60 00:26:15.888 ioengine=libaio 00:26:15.888 direct=1 00:26:15.888 bs=4096 00:26:15.888 iodepth=1 00:26:15.888 norandommap=0 00:26:15.888 numjobs=1 00:26:15.888 00:26:15.888 verify_dump=1 00:26:15.888 verify_backlog=512 00:26:15.888 verify_state_save=0 00:26:15.888 do_verify=1 00:26:15.888 verify=crc32c-intel 00:26:15.888 [job0] 00:26:15.888 filename=/dev/nvme0n1 00:26:15.888 Could not set queue depth (nvme0n1) 00:26:16.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:16.146 fio-3.35 00:26:16.146 Starting 1 thread 
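The fio-wrapper invocation above produces exactly the job echoed into the log: a single libaio thread doing 4 KiB sequential writes with crc32c-intel verification for 60 seconds against the namespace that surfaced as /dev/nvme0n1. A standalone equivalent, assuming the same device name (job0.fio is an illustrative filename):

cat > job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio job0.fio &
fio_pid=$!

The bdev_delay_update_latency calls that follow are the heart of the test: they raise the Delay0 latencies from microseconds into the tens-of-seconds range (the RPC takes latencies in microseconds), sleep, then drop them back to 30, so fio's verified I/O has to survive a window in which the target is slower than the initiator timeout.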
00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.425 true 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.425 true 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.425 true 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.425 true 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.425 00:45:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:21.955 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:21.956 true 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:21.956 true 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:21.956 true 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:21.956 true 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:21.956 00:45:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 95213 00:27:18.206 00:27:18.206 job0: (groupid=0, jobs=1): err= 0: pid=95240: Fri Jul 12 00:46:20 2024 00:27:18.206 read: IOPS=600, BW=2404KiB/s (2462kB/s)(141MiB/60001msec) 00:27:18.206 slat (usec): min=12, max=130, avg=17.87, stdev= 5.35 00:27:18.206 clat (usec): min=210, max=679, avg=268.56, stdev=28.41 00:27:18.206 lat (usec): min=227, max=702, avg=286.42, stdev=29.85 00:27:18.206 clat percentiles (usec): 00:27:18.206 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:27:18.206 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:27:18.206 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 322], 00:27:18.206 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 429], 00:27:18.206 | 99.99th=[ 570] 00:27:18.206 write: IOPS=605, BW=2423KiB/s (2482kB/s)(142MiB/60001msec); 0 zone resets 00:27:18.206 slat (usec): min=17, max=11133, avg=27.47, stdev=68.86 00:27:18.206 clat (usec): min=38, max=40822k, avg=1334.80, stdev=214103.37 00:27:18.206 lat (usec): min=181, max=40822k, avg=1362.27, stdev=214103.38 00:27:18.206 clat percentiles (usec): 00:27:18.206 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:27:18.206 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:27:18.206 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 260], 00:27:18.206 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 379], 99.95th=[ 478], 00:27:18.206 | 99.99th=[ 1205] 00:27:18.206 bw ( KiB/s): min= 2920, max= 8192, per=100.00%, avg=7440.00, stdev=1075.75, samples=38 00:27:18.206 iops : min= 730, max= 2048, avg=1860.00, stdev=268.94, samples=38 00:27:18.206 lat (usec) : 50=0.01%, 250=60.12%, 500=39.85%, 750=0.02%, 1000=0.01% 00:27:18.206 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:27:18.206 cpu : usr=0.49%, sys=2.01%, ctx=72448, majf=0, minf=2 00:27:18.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:18.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.206 issued rwts: total=36058,36352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:18.206 00:27:18.206 Run status group 0 (all jobs): 00:27:18.206 READ: bw=2404KiB/s (2462kB/s), 2404KiB/s-2404KiB/s (2462kB/s-2462kB/s), io=141MiB (148MB), run=60001-60001msec 00:27:18.206 WRITE: bw=2423KiB/s (2482kB/s), 2423KiB/s-2423KiB/s (2482kB/s-2482kB/s), io=142MiB (149MB), run=60001-60001msec 00:27:18.206 00:27:18.206 Disk stats (read/write): 00:27:18.206 nvme0n1: ios=36089/36176, merge=0/0, ticks=10031/8158, in_queue=18189, util=99.61% 00:27:18.206 00:46:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:18.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:18.206 nvmf hotplug test: fio successful as expected 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.206 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.207 rmmod nvme_tcp 00:27:18.207 rmmod nvme_fabrics 00:27:18.207 rmmod nvme_keyring 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 95131 ']' 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 95131 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 95131 ']' 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 95131 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 95131 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:18.207 killing process with pid 95131 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95131' 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 95131 00:27:18.207 00:46:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 95131 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:18.207 00:27:18.207 real 1m6.032s 00:27:18.207 user 4m9.582s 00:27:18.207 sys 0m8.401s 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.207 00:46:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.207 ************************************ 00:27:18.207 END TEST nvmf_initiator_timeout 00:27:18.207 ************************************ 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:18.207 00:46:22 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:27:18.207 00:46:22 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.207 00:46:22 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.207 00:46:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:18.207 00:46:22 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.207 00:46:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.207 ************************************ 00:27:18.207 START TEST nvmf_multicontroller 00:27:18.207 ************************************ 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:18.207 * Looking for test storage... 
00:27:18.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:18.207 00:46:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:18.207 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:18.208 00:46:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:18.208 Cannot find device "nvmf_tgt_br" 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:18.208 Cannot find device "nvmf_tgt_br2" 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:27:18.208 Cannot find device "nvmf_tgt_br" 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:18.208 Cannot find device "nvmf_tgt_br2" 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.208 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:18.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:18.466 00:27:18.466 --- 10.0.0.2 ping statistics --- 00:27:18.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.466 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:18.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:27:18.466 00:27:18.466 --- 10.0.0.3 ping statistics --- 00:27:18.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.466 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:27:18.466 00:27:18.466 --- 10.0.0.1 ping statistics --- 00:27:18.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.466 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.466 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=96050 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 96050 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 96050 ']' 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.467 00:46:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.725 [2024-07-12 00:46:23.496151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:18.725 [2024-07-12 00:46:23.496319] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.984 [2024-07-12 00:46:23.671596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:19.243 [2024-07-12 00:46:23.969376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.243 [2024-07-12 00:46:23.969754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.243 [2024-07-12 00:46:23.969784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.243 [2024-07-12 00:46:23.969801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.243 [2024-07-12 00:46:23.969814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
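What nvmfappstart does at this point, reduced to a sketch: it launches nvmf_tgt inside the target network namespace and polls until the application answers JSON-RPC on /var/tmp/spdk.sock. The launch command is taken verbatim from the log above; the polling loop is a simplification of the real waitforlisten helper in autotest_common.sh, with rpc_get_methods used only as a cheap probe of the socket.

    # Launch the NVMe-oF target inside the namespace; -m 0xE pins the
    # reactors to cores 1-3, matching the "Reactor started on core 1/2/3"
    # notices below.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll the RPC socket until
    # the target is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done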
00:27:19.243 [2024-07-12 00:46:23.970152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.243 [2024-07-12 00:46:23.970362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:19.243 [2024-07-12 00:46:23.970363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.809 [2024-07-12 00:46:24.628937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.809 Malloc0 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.809 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.068 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.068 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.068 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.068 [2024-07-12 00:46:24.754123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.068 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.068 
00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 [2024-07-12 00:46:24.762091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 Malloc1 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=96106 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 96106 /var/tmp/bdevperf.sock 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 96106 ']' 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.069 00:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.447 00:46:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.447 00:46:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:21.447 00:46:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:21.447 00:46:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.447 00:46:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.447 NVMe0n1 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.447 1 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.447 2024/07/12 00:46:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:21.447 request: 00:27:21.447 { 00:27:21.447 "method": "bdev_nvme_attach_controller", 00:27:21.447 "params": { 00:27:21.447 "name": "NVMe0", 00:27:21.447 "trtype": "tcp", 00:27:21.447 "traddr": "10.0.0.2", 00:27:21.447 "adrfam": "ipv4", 00:27:21.447 "trsvcid": "4420", 00:27:21.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.447 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:21.447 "hostaddr": "10.0.0.2", 00:27:21.447 "hostsvcid": "60000", 00:27:21.447 "prchk_reftag": false, 00:27:21.447 "prchk_guard": false, 00:27:21.447 "hdgst": false, 00:27:21.447 "ddgst": false 00:27:21.447 } 00:27:21.447 } 00:27:21.447 Got JSON-RPC error response 00:27:21.447 GoRPCClient: error on JSON-RPC call 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
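Every negative case in this test goes through the NOT wrapper from autotest_common.sh, which runs a command and inverts its exit status so that an expected failure counts as a pass. A minimal sketch of the pattern (an illustrative simplification, not the exact upstream helper, which also post-processes exit codes above 128; rpc.py stands in for the test's rpc_cmd wrapper):

    # NOT succeeds only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Expected failure from this test: attaching a second subsystem under
    # the existing controller name NVMe0 over the same network path must
    # be rejected (Code=-114, "A controller named NVMe0 already exists").
    NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000

The JSON-RPC error dumped just below is therefore the expected outcome; the test would only fail if the call succeeded.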
00:27:21.447 2024/07/12 00:46:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:21.447 request: 00:27:21.447 { 00:27:21.447 "method": "bdev_nvme_attach_controller", 00:27:21.447 "params": { 00:27:21.447 "name": "NVMe0", 00:27:21.447 "trtype": "tcp", 00:27:21.447 "traddr": "10.0.0.2", 00:27:21.447 "adrfam": "ipv4", 00:27:21.447 "trsvcid": "4420", 00:27:21.447 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:21.447 "hostaddr": "10.0.0.2", 00:27:21.447 "hostsvcid": "60000", 00:27:21.447 "prchk_reftag": false, 00:27:21.447 "prchk_guard": false, 00:27:21.447 "hdgst": false, 00:27:21.447 "ddgst": false 00:27:21.447 } 00:27:21.447 } 00:27:21.447 Got JSON-RPC error response 00:27:21.447 GoRPCClient: error on JSON-RPC call 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.447 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 2024/07/12 00:46:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:27:21.448 request: 00:27:21.448 { 00:27:21.448 "method": "bdev_nvme_attach_controller", 00:27:21.448 "params": { 00:27:21.448 "name": "NVMe0", 00:27:21.448 "trtype": "tcp", 00:27:21.448 "traddr": "10.0.0.2", 00:27:21.448 "adrfam": "ipv4", 00:27:21.448 "trsvcid": "4420", 00:27:21.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.448 "hostaddr": "10.0.0.2", 00:27:21.448 "hostsvcid": "60000", 00:27:21.448 "prchk_reftag": false, 00:27:21.448 "prchk_guard": false, 00:27:21.448 "hdgst": false, 00:27:21.448 "ddgst": false, 00:27:21.448 "multipath": "disable" 00:27:21.448 } 00:27:21.448 } 00:27:21.448 Got JSON-RPC error response 00:27:21.448 GoRPCClient: error on JSON-RPC call 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 2024/07/12 00:46:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:27:21.448 request: 00:27:21.448 { 00:27:21.448 "method": "bdev_nvme_attach_controller", 00:27:21.448 "params": { 00:27:21.448 "name": "NVMe0", 00:27:21.448 
"trtype": "tcp", 00:27:21.448 "traddr": "10.0.0.2", 00:27:21.448 "adrfam": "ipv4", 00:27:21.448 "trsvcid": "4420", 00:27:21.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.448 "hostaddr": "10.0.0.2", 00:27:21.448 "hostsvcid": "60000", 00:27:21.448 "prchk_reftag": false, 00:27:21.448 "prchk_guard": false, 00:27:21.448 "hdgst": false, 00:27:21.448 "ddgst": false, 00:27:21.448 "multipath": "failover" 00:27:21.448 } 00:27:21.448 } 00:27:21.448 Got JSON-RPC error response 00:27:21.448 GoRPCClient: error on JSON-RPC call 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:21.448 00:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:22.867 0 00:27:22.867 00:46:27 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 96106 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 96106 ']' 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 96106 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96106 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.867 killing process with pid 96106 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96106' 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 96106 00:27:22.867 00:46:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 96106 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:24.244 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:27:24.244 [2024-07-12 00:46:24.981186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:24.244 [2024-07-12 00:46:24.981406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96106 ]
00:27:24.244 [2024-07-12 00:46:25.149144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.244 [2024-07-12 00:46:25.433479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.244 [2024-07-12 00:46:26.278449] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4781cea9-2442-437b-b03f-adfa6f2d8035 already exists
00:27:24.244 [2024-07-12 00:46:26.278534] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4781cea9-2442-437b-b03f-adfa6f2d8035 alias for bdev NVMe1n1
00:27:24.244 [2024-07-12 00:46:26.278568] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:27:24.244 Running I/O for 1 seconds...
00:27:24.244
00:27:24.244 Latency(us)
00:27:24.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:24.244 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:27:24.244 NVMe0n1 : 1.01 12858.00 50.23 0.00 0.00 9936.16 5570.56 17873.45
00:27:24.244 ===================================================================================================================
00:27:24.244 Total : 12858.00 50.23 0.00 0.00 9936.16 5570.56 17873.45
00:27:24.244 Received shutdown signal, test time was about 1.000000 seconds
00:27:24.244
00:27:24.244 Latency(us)
00:27:24.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:24.244 ===================================================================================================================
00:27:24.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
--- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:24.244 rmmod nvme_tcp
00:27:24.244 rmmod nvme_fabrics
00:27:24.244 rmmod nvme_keyring
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 96050 ']'
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 96050
00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 96050 ']' 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller
-- common/autotest_common.sh@952 -- # kill -0 96050 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96050 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:24.244 killing process with pid 96050 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96050' 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 96050 00:27:24.244 00:46:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 96050 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:26.148 00:27:26.148 real 0m7.770s 00:27:26.148 user 0m23.351s 00:27:26.148 sys 0m1.478s 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.148 ************************************ 00:27:26.148 END TEST nvmf_multicontroller 00:27:26.148 ************************************ 00:27:26.148 00:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:26.148 00:46:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:26.148 00:46:30 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:26.148 00:46:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:26.148 00:46:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.148 00:46:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.148 ************************************ 00:27:26.148 START TEST nvmf_aer 00:27:26.148 ************************************ 00:27:26.148 00:46:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:26.148 * Looking for test storage... 
00:27:26.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:26.148 00:46:30 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.149 
00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:26.149 Cannot find device "nvmf_tgt_br" 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:26.149 Cannot find device "nvmf_tgt_br2" 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:26.149 Cannot find device "nvmf_tgt_br" 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:26.149 Cannot find device "nvmf_tgt_br2" 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
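Taken together, the nvmf_veth_init steps logged above and continued below build one small bridged topology; a condensed sketch, with the commands as they appear in the surrounding log and the per-link "up" steps and error handling trimmed:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge                 # tie the host-side veth ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow simply prove, before the target application is started, that both target addresses are reachable from the initiator side and that 10.0.0.1 is reachable from inside the namespace.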
00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:26.149 00:46:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:26.149 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:26.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:27:26.408 00:27:26.408 --- 10.0.0.2 ping statistics --- 00:27:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.408 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:26.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:26.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:27:26.408 00:27:26.408 --- 10.0.0.3 ping statistics --- 00:27:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.408 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:26.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:26.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:26.408 00:27:26.408 --- 10.0.0.1 ping statistics --- 00:27:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.408 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=96376 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 96376 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 96376 ']' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.408 00:46:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:26.408 [2024-07-12 00:46:31.288043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:26.408 [2024-07-12 00:46:31.288346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.668 [2024-07-12 00:46:31.481527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.926 [2024-07-12 00:46:31.759110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.926 [2024-07-12 00:46:31.759201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:26.926 [2024-07-12 00:46:31.759219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.926 [2024-07-12 00:46:31.759245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.926 [2024-07-12 00:46:31.759256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.926 [2024-07-12 00:46:31.759559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.926 [2024-07-12 00:46:31.759943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.926 [2024-07-12 00:46:31.760312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.926 [2024-07-12 00:46:31.760295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 [2024-07-12 00:46:32.273647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 Malloc0 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 [2024-07-12 00:46:32.407164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:27.494 [ 00:27:27.494 { 00:27:27.494 "allow_any_host": true, 00:27:27.494 "hosts": [], 00:27:27.494 "listen_addresses": [], 00:27:27.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:27.494 "subtype": "Discovery" 00:27:27.494 }, 00:27:27.494 { 00:27:27.494 "allow_any_host": true, 00:27:27.494 "hosts": [], 00:27:27.494 "listen_addresses": [ 00:27:27.494 { 00:27:27.494 "adrfam": "IPv4", 00:27:27.494 "traddr": "10.0.0.2", 00:27:27.494 "trsvcid": "4420", 00:27:27.494 "trtype": "TCP" 00:27:27.494 } 00:27:27.494 ], 00:27:27.494 "max_cntlid": 65519, 00:27:27.494 "max_namespaces": 2, 00:27:27.494 "min_cntlid": 1, 00:27:27.494 "model_number": "SPDK bdev Controller", 00:27:27.494 "namespaces": [ 00:27:27.494 { 00:27:27.494 "bdev_name": "Malloc0", 00:27:27.494 "name": "Malloc0", 00:27:27.494 "nguid": "B67CC672763A4370AE0212DAFB6058B0", 00:27:27.494 "nsid": 1, 00:27:27.494 "uuid": "b67cc672-763a-4370-ae02-12dafb6058b0" 00:27:27.494 } 00:27:27.494 ], 00:27:27.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.494 "serial_number": "SPDK00000000000001", 00:27:27.494 "subtype": "NVMe" 00:27:27.494 } 00:27:27.494 ] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:27.494 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=96436 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:27.753 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.011 Malloc1 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.011 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.011 [ 00:27:28.011 { 00:27:28.011 "allow_any_host": true, 00:27:28.011 "hosts": [], 00:27:28.011 "listen_addresses": [], 00:27:28.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:28.011 "subtype": "Discovery" 00:27:28.011 }, 00:27:28.011 { 00:27:28.011 "allow_any_host": true, 00:27:28.011 "hosts": [], 00:27:28.011 "listen_addresses": [ 00:27:28.011 { 00:27:28.011 "adrfam": "IPv4", 00:27:28.011 "traddr": "10.0.0.2", 00:27:28.011 "trsvcid": "4420", 00:27:28.011 "trtype": "TCP" 00:27:28.011 } 00:27:28.268 ], 00:27:28.268 "max_cntlid": 65519, 00:27:28.268 "max_namespaces": 2, 00:27:28.268 "min_cntlid": 1, 00:27:28.268 "model_number": "SPDK bdev Controller", 00:27:28.268 "namespaces": [ 00:27:28.268 { 00:27:28.268 "bdev_name": "Malloc0", 00:27:28.268 "name": "Malloc0", 00:27:28.268 "nguid": "B67CC672763A4370AE0212DAFB6058B0", 00:27:28.268 "nsid": 1, 00:27:28.268 "uuid": "b67cc672-763a-4370-ae02-12dafb6058b0" 00:27:28.268 }, 00:27:28.268 { 00:27:28.268 "bdev_name": "Malloc1", 00:27:28.268 "name": "Malloc1", 00:27:28.268 "nguid": "A4D565C844614B319702B04E85AA7CFE", 00:27:28.268 "nsid": 2, 00:27:28.268 "uuid": "a4d565c8-4461-4b31-9702-b04e85aa7cfe" 00:27:28.268 } 00:27:28.268 ], 00:27:28.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.268 "serial_number": "SPDK00000000000001", 00:27:28.268 "subtype": "NVMe" 00:27:28.268 } 00:27:28.268 ] 00:27:28.268 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.268 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 96436 00:27:28.268 Asynchronous Event Request test 00:27:28.268 Attaching to 10.0.0.2 00:27:28.268 Attached to 10.0.0.2 00:27:28.268 Registering asynchronous event callbacks... 00:27:28.268 Starting namespace attribute notice tests for all controllers... 00:27:28.268 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:28.268 aer_cb - Changed Namespace 00:27:28.268 Cleaning up... 
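The AER target above is assembled entirely over SPDK's JSON-RPC interface: rpc_cmd in these scripts is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A minimal sketch of the equivalent standalone sequence, with every flag copied from the rpc_cmd traces above (the rpc.py path and default socket are this run's, not universal):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as traced
    $rpc bdev_malloc_create 64 512 --name Malloc0                  # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # becomes nsid 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The aer tool connects with -n 2 and waits on /tmp/aer_touch_file; adding Malloc1 as a second namespace is what fires the changed-namespace AEN (log page 4, event type 0x02) recorded in the output above.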
00:27:28.268 00:46:32 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:28.268 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.268 00:46:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.268 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.268 00:46:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:28.268 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.268 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.524 rmmod nvme_tcp 00:27:28.524 rmmod nvme_fabrics 00:27:28.524 rmmod nvme_keyring 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 96376 ']' 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 96376 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 96376 ']' 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 96376 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:28.524 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96376 00:27:28.782 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:28.782 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:28.782 killing process with pid 96376 00:27:28.782 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96376' 00:27:28.782 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 96376 00:27:28.782 00:46:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 96376 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:30.156 00:27:30.156 real 0m4.075s 00:27:30.156 user 0m10.943s 00:27:30.156 sys 0m1.006s 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.156 00:46:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.156 ************************************ 00:27:30.156 END TEST nvmf_aer 00:27:30.156 ************************************ 00:27:30.156 00:46:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:30.156 00:46:34 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:30.156 00:46:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:30.156 00:46:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.156 00:46:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.156 ************************************ 00:27:30.156 START TEST nvmf_async_init 00:27:30.156 ************************************ 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:30.156 * Looking for test storage... 
00:27:30.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:30.156 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fe024ad57c6c46548f420da4d82be1af 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.157 00:46:34 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:30.157 Cannot find device "nvmf_tgt_br" 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:30.157 Cannot find device "nvmf_tgt_br2" 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:30.157 Cannot find device "nvmf_tgt_br" 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:27:30.157 00:46:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:27:30.157 Cannot find device "nvmf_tgt_br2" 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:30.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:30.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:30.157 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:30.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:27:30.416 00:27:30.416 --- 10.0.0.2 ping statistics --- 00:27:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.416 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:30.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:30.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:27:30.416 00:27:30.416 --- 10.0.0.3 ping statistics --- 00:27:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.416 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:30.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:30.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:27:30.416 00:27:30.416 --- 10.0.0.1 ping statistics --- 00:27:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.416 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=96628 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 96628 00:27:30.416 00:46:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 96628 ']' 00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.417 00:46:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.675 [2024-07-12 00:46:35.436773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:30.675 [2024-07-12 00:46:35.436972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.934 [2024-07-12 00:46:35.616762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.193 [2024-07-12 00:46:35.907420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.193 [2024-07-12 00:46:35.907497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.193 [2024-07-12 00:46:35.907519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.193 [2024-07-12 00:46:35.907539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.193 [2024-07-12 00:46:35.907553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.193 [2024-07-12 00:46:35.907603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.452 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.452 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:31.452 00:46:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.452 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.452 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 [2024-07-12 00:46:36.436278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 null0 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 
00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fe024ad57c6c46548f420da4d82be1af 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.711 [2024-07-12 00:46:36.476636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.711 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.970 nvme0n1 00:27:31.970 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.970 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.970 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.970 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.970 [ 00:27:31.970 { 00:27:31.970 "aliases": [ 00:27:31.970 "fe024ad5-7c6c-4654-8f42-0da4d82be1af" 00:27:31.970 ], 00:27:31.970 "assigned_rate_limits": { 00:27:31.970 "r_mbytes_per_sec": 0, 00:27:31.970 "rw_ios_per_sec": 0, 00:27:31.970 "rw_mbytes_per_sec": 0, 00:27:31.970 "w_mbytes_per_sec": 0 00:27:31.970 }, 00:27:31.970 "block_size": 512, 00:27:31.970 "claimed": false, 00:27:31.970 "driver_specific": { 00:27:31.970 "mp_policy": "active_passive", 00:27:31.970 "nvme": [ 00:27:31.970 { 00:27:31.970 "ctrlr_data": { 00:27:31.971 "ana_reporting": false, 00:27:31.971 "cntlid": 1, 00:27:31.971 "firmware_revision": "24.09", 00:27:31.971 "model_number": "SPDK bdev Controller", 00:27:31.971 "multi_ctrlr": true, 00:27:31.971 "oacs": { 00:27:31.971 "firmware": 0, 00:27:31.971 "format": 0, 00:27:31.971 "ns_manage": 0, 00:27:31.971 "security": 0 00:27:31.971 }, 00:27:31.971 "serial_number": "00000000000000000000", 00:27:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.971 "vendor_id": "0x8086" 00:27:31.971 }, 00:27:31.971 "ns_data": { 00:27:31.971 "can_share": true, 00:27:31.971 "id": 1 00:27:31.971 }, 00:27:31.971 "trid": { 00:27:31.971 "adrfam": "IPv4", 
00:27:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.971 "traddr": "10.0.0.2", 00:27:31.971 "trsvcid": "4420", 00:27:31.971 "trtype": "TCP" 00:27:31.971 }, 00:27:31.971 "vs": { 00:27:31.971 "nvme_version": "1.3" 00:27:31.971 } 00:27:31.971 } 00:27:31.971 ] 00:27:31.971 }, 00:27:31.971 "memory_domains": [ 00:27:31.971 { 00:27:31.971 "dma_device_id": "system", 00:27:31.971 "dma_device_type": 1 00:27:31.971 } 00:27:31.971 ], 00:27:31.971 "name": "nvme0n1", 00:27:31.971 "num_blocks": 2097152, 00:27:31.971 "product_name": "NVMe disk", 00:27:31.971 "supported_io_types": { 00:27:31.971 "abort": true, 00:27:31.971 "compare": true, 00:27:31.971 "compare_and_write": true, 00:27:31.971 "copy": true, 00:27:31.971 "flush": true, 00:27:31.971 "get_zone_info": false, 00:27:31.971 "nvme_admin": true, 00:27:31.971 "nvme_io": true, 00:27:31.971 "nvme_io_md": false, 00:27:31.971 "nvme_iov_md": false, 00:27:31.971 "read": true, 00:27:31.971 "reset": true, 00:27:31.971 "seek_data": false, 00:27:31.971 "seek_hole": false, 00:27:31.971 "unmap": false, 00:27:31.971 "write": true, 00:27:31.971 "write_zeroes": true, 00:27:31.971 "zcopy": false, 00:27:31.971 "zone_append": false, 00:27:31.971 "zone_management": false 00:27:31.971 }, 00:27:31.971 "uuid": "fe024ad5-7c6c-4654-8f42-0da4d82be1af", 00:27:31.971 "zoned": false 00:27:31.971 } 00:27:31.971 ] 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.971 [2024-07-12 00:46:36.744916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.971 [2024-07-12 00:46:36.745088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:31.971 [2024-07-12 00:46:36.877729] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.971 [ 00:27:31.971 { 00:27:31.971 "aliases": [ 00:27:31.971 "fe024ad5-7c6c-4654-8f42-0da4d82be1af" 00:27:31.971 ], 00:27:31.971 "assigned_rate_limits": { 00:27:31.971 "r_mbytes_per_sec": 0, 00:27:31.971 "rw_ios_per_sec": 0, 00:27:31.971 "rw_mbytes_per_sec": 0, 00:27:31.971 "w_mbytes_per_sec": 0 00:27:31.971 }, 00:27:31.971 "block_size": 512, 00:27:31.971 "claimed": false, 00:27:31.971 "driver_specific": { 00:27:31.971 "mp_policy": "active_passive", 00:27:31.971 "nvme": [ 00:27:31.971 { 00:27:31.971 "ctrlr_data": { 00:27:31.971 "ana_reporting": false, 00:27:31.971 "cntlid": 2, 00:27:31.971 "firmware_revision": "24.09", 00:27:31.971 "model_number": "SPDK bdev Controller", 00:27:31.971 "multi_ctrlr": true, 00:27:31.971 "oacs": { 00:27:31.971 "firmware": 0, 00:27:31.971 "format": 0, 00:27:31.971 "ns_manage": 0, 00:27:31.971 "security": 0 00:27:31.971 }, 00:27:31.971 "serial_number": "00000000000000000000", 00:27:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.971 "vendor_id": "0x8086" 00:27:31.971 }, 00:27:31.971 "ns_data": { 00:27:31.971 "can_share": true, 00:27:31.971 "id": 1 00:27:31.971 }, 00:27:31.971 "trid": { 00:27:31.971 "adrfam": "IPv4", 00:27:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.971 "traddr": "10.0.0.2", 00:27:31.971 "trsvcid": "4420", 00:27:31.971 "trtype": "TCP" 00:27:31.971 }, 00:27:31.971 "vs": { 00:27:31.971 "nvme_version": "1.3" 00:27:31.971 } 00:27:31.971 } 00:27:31.971 ] 00:27:31.971 }, 00:27:31.971 "memory_domains": [ 00:27:31.971 { 00:27:31.971 "dma_device_id": "system", 00:27:31.971 "dma_device_type": 1 00:27:31.971 } 00:27:31.971 ], 00:27:31.971 "name": "nvme0n1", 00:27:31.971 "num_blocks": 2097152, 00:27:31.971 "product_name": "NVMe disk", 00:27:31.971 "supported_io_types": { 00:27:31.971 "abort": true, 00:27:31.971 "compare": true, 00:27:31.971 "compare_and_write": true, 00:27:31.971 "copy": true, 00:27:31.971 "flush": true, 00:27:31.971 "get_zone_info": false, 00:27:31.971 "nvme_admin": true, 00:27:31.971 "nvme_io": true, 00:27:31.971 "nvme_io_md": false, 00:27:31.971 "nvme_iov_md": false, 00:27:31.971 "read": true, 00:27:31.971 "reset": true, 00:27:31.971 "seek_data": false, 00:27:31.971 "seek_hole": false, 00:27:31.971 "unmap": false, 00:27:31.971 "write": true, 00:27:31.971 "write_zeroes": true, 00:27:31.971 "zcopy": false, 00:27:31.971 "zone_append": false, 00:27:31.971 "zone_management": false 00:27:31.971 }, 00:27:31.971 "uuid": "fe024ad5-7c6c-4654-8f42-0da4d82be1af", 00:27:31.971 "zoned": false 00:27:31.971 } 00:27:31.971 ] 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.971 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:32.231 00:46:36 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DFS3fdyhju 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DFS3fdyhju 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 [2024-07-12 00:46:36.941226] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.231 [2024-07-12 00:46:36.941571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DFS3fdyhju 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 [2024-07-12 00:46:36.949261] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DFS3fdyhju 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 [2024-07-12 00:46:36.957215] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:32.231 [2024-07-12 00:46:36.957344] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:32.231 nvme0n1 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 [ 00:27:32.231 { 00:27:32.231 "aliases": [ 00:27:32.231 "fe024ad5-7c6c-4654-8f42-0da4d82be1af" 00:27:32.231 ], 00:27:32.231 "assigned_rate_limits": { 00:27:32.231 "r_mbytes_per_sec": 0, 00:27:32.231 
"rw_ios_per_sec": 0, 00:27:32.231 "rw_mbytes_per_sec": 0, 00:27:32.231 "w_mbytes_per_sec": 0 00:27:32.231 }, 00:27:32.231 "block_size": 512, 00:27:32.231 "claimed": false, 00:27:32.231 "driver_specific": { 00:27:32.231 "mp_policy": "active_passive", 00:27:32.231 "nvme": [ 00:27:32.231 { 00:27:32.231 "ctrlr_data": { 00:27:32.231 "ana_reporting": false, 00:27:32.231 "cntlid": 3, 00:27:32.231 "firmware_revision": "24.09", 00:27:32.231 "model_number": "SPDK bdev Controller", 00:27:32.231 "multi_ctrlr": true, 00:27:32.231 "oacs": { 00:27:32.231 "firmware": 0, 00:27:32.231 "format": 0, 00:27:32.231 "ns_manage": 0, 00:27:32.231 "security": 0 00:27:32.231 }, 00:27:32.231 "serial_number": "00000000000000000000", 00:27:32.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.231 "vendor_id": "0x8086" 00:27:32.231 }, 00:27:32.231 "ns_data": { 00:27:32.231 "can_share": true, 00:27:32.231 "id": 1 00:27:32.231 }, 00:27:32.231 "trid": { 00:27:32.231 "adrfam": "IPv4", 00:27:32.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.231 "traddr": "10.0.0.2", 00:27:32.231 "trsvcid": "4421", 00:27:32.231 "trtype": "TCP" 00:27:32.231 }, 00:27:32.231 "vs": { 00:27:32.231 "nvme_version": "1.3" 00:27:32.231 } 00:27:32.231 } 00:27:32.231 ] 00:27:32.231 }, 00:27:32.231 "memory_domains": [ 00:27:32.231 { 00:27:32.231 "dma_device_id": "system", 00:27:32.231 "dma_device_type": 1 00:27:32.231 } 00:27:32.231 ], 00:27:32.231 "name": "nvme0n1", 00:27:32.231 "num_blocks": 2097152, 00:27:32.231 "product_name": "NVMe disk", 00:27:32.231 "supported_io_types": { 00:27:32.231 "abort": true, 00:27:32.231 "compare": true, 00:27:32.231 "compare_and_write": true, 00:27:32.231 "copy": true, 00:27:32.231 "flush": true, 00:27:32.231 "get_zone_info": false, 00:27:32.231 "nvme_admin": true, 00:27:32.231 "nvme_io": true, 00:27:32.231 "nvme_io_md": false, 00:27:32.231 "nvme_iov_md": false, 00:27:32.231 "read": true, 00:27:32.231 "reset": true, 00:27:32.231 "seek_data": false, 00:27:32.231 "seek_hole": false, 00:27:32.231 "unmap": false, 00:27:32.231 "write": true, 00:27:32.231 "write_zeroes": true, 00:27:32.231 "zcopy": false, 00:27:32.231 "zone_append": false, 00:27:32.231 "zone_management": false 00:27:32.231 }, 00:27:32.231 "uuid": "fe024ad5-7c6c-4654-8f42-0da4d82be1af", 00:27:32.231 "zoned": false 00:27:32.231 } 00:27:32.231 ] 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DFS3fdyhju 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:32.231 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.232 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:32.232 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:27:32.232 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.232 rmmod nvme_tcp 00:27:32.232 rmmod nvme_fabrics 00:27:32.491 rmmod nvme_keyring 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 96628 ']' 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 96628 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 96628 ']' 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 96628 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96628 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96628' 00:27:32.491 killing process with pid 96628 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 96628 00:27:32.491 [2024-07-12 00:46:37.223268] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:32.491 [2024-07-12 00:46:37.223328] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:32.491 00:46:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 96628 00:27:33.868 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.868 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:33.869 ************************************ 00:27:33.869 END TEST nvmf_async_init 00:27:33.869 ************************************ 00:27:33.869 00:27:33.869 real 0m3.697s 00:27:33.869 user 0m3.362s 00:27:33.869 sys 0m0.795s 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.869 00:46:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:33.869 00:46:38 nvmf_tcp -- 
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.869 ************************************ 00:27:33.869 START TEST dma 00:27:33.869 ************************************ 00:27:33.869 00:46:38 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:33.869 * Looking for test storage... 00:27:33.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:33.869 00:46:38 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:33.869 00:46:38 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.869 00:46:38 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.869 00:46:38 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.869 00:46:38 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.869 00:46:38 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.869 00:46:38 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.869 00:46:38 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:33.869 00:46:38 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.869 00:46:38 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.869 00:46:38 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:33.869 00:46:38 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:33.869 ************************************ 00:27:33.869 END TEST dma 00:27:33.869 ************************************ 00:27:33.869 00:27:33.869 real 0m0.100s 00:27:33.869 user 0m0.053s 00:27:33.869 sys 0m0.054s 00:27:33.869 00:46:38 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.869 00:46:38 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:33.869 00:46:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.869 00:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.869 
************************************ 00:27:33.869 START TEST nvmf_identify 00:27:33.869 ************************************ 00:27:33.869 00:46:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:33.869 * Looking for test storage... 00:27:34.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:34.129 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:34.130 Cannot find device "nvmf_tgt_br" 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:34.130 Cannot find device "nvmf_tgt_br2" 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:34.130 Cannot find device "nvmf_tgt_br" 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:34.130 Cannot find device "nvmf_tgt_br2" 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:34.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.130 00:46:38 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:34.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:34.130 00:46:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:34.130 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:34.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:34.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:27:34.389 00:27:34.389 --- 10.0.0.2 ping statistics --- 00:27:34.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.389 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:34.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:34.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:27:34.389 00:27:34.389 --- 10.0.0.3 ping statistics --- 00:27:34.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.389 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:34.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:27:34.389 00:27:34.389 --- 10.0.0.1 ping statistics --- 00:27:34.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.389 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.389 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=96900 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 96900 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 96900 ']' 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
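Everything from "ip netns add" through the three ping checks above is nvmf_veth_init building a self-contained test network: the target's two veth ends (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator end (10.0.0.1) stays in the root namespace, the bridge-side peers are enslaved to nvmf_br, and iptables opens TCP port 4420 for NVMe/TCP. The earlier "Cannot find device" / "Cannot open network namespace" messages are only best-effort cleanup of leftovers from a previous run. A condensed, standalone rebuild of the same topology (run as root; it omits the harness's cleanup pass, but names and addresses are exactly those in the log):

  #!/usr/bin/env bash
  set -e
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                              # tie it together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity, as logged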
00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.390 00:46:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:34.648 [2024-07-12 00:46:39.328151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:34.648 [2024-07-12 00:46:39.328810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.648 [2024-07-12 00:46:39.503380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.907 [2024-07-12 00:46:39.758784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.907 [2024-07-12 00:46:39.758865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.907 [2024-07-12 00:46:39.758898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.907 [2024-07-12 00:46:39.758912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.907 [2024-07-12 00:46:39.758924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.907 [2024-07-12 00:46:39.759158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.907 [2024-07-12 00:46:39.759292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.907 [2024-07-12 00:46:39.760072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.907 [2024-07-12 00:46:39.760110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 [2024-07-12 00:46:40.265567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 Malloc0 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.474 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
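With the network in place, identify.sh launches the target inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, PID 96900), waits for the RPC socket, and then drives everything over JSON-RPC: the nvmf_create_transport, bdev_malloc_create, and nvmf_create_subsystem calls echoed above, followed just below by the namespace and listener calls. rpc_cmd is the test harness's wrapper; done by hand the same bring-up looks roughly like the sketch below using SPDK's scripts/rpc.py (arguments copied from the trace; framework_wait_init stands in for the harness's waitforlisten):

  # start the target inside the test namespace, then configure it over RPC
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py framework_wait_init        # block until the app accepts RPCs
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
          --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420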
00:27:35.732 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.733 [2024-07-12 00:46:40.431031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:35.733 [ 00:27:35.733 { 00:27:35.733 "allow_any_host": true, 00:27:35.733 "hosts": [], 00:27:35.733 "listen_addresses": [ 00:27:35.733 { 00:27:35.733 "adrfam": "IPv4", 00:27:35.733 "traddr": "10.0.0.2", 00:27:35.733 "trsvcid": "4420", 00:27:35.733 "trtype": "TCP" 00:27:35.733 } 00:27:35.733 ], 00:27:35.733 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:35.733 "subtype": "Discovery" 00:27:35.733 }, 00:27:35.733 { 00:27:35.733 "allow_any_host": true, 00:27:35.733 "hosts": [], 00:27:35.733 "listen_addresses": [ 00:27:35.733 { 00:27:35.733 "adrfam": "IPv4", 00:27:35.733 "traddr": "10.0.0.2", 00:27:35.733 "trsvcid": "4420", 00:27:35.733 "trtype": "TCP" 00:27:35.733 } 00:27:35.733 ], 00:27:35.733 "max_cntlid": 65519, 00:27:35.733 "max_namespaces": 32, 00:27:35.733 "min_cntlid": 1, 00:27:35.733 "model_number": "SPDK bdev Controller", 00:27:35.733 "namespaces": [ 00:27:35.733 { 00:27:35.733 "bdev_name": "Malloc0", 00:27:35.733 "eui64": "ABCDEF0123456789", 00:27:35.733 "name": "Malloc0", 00:27:35.733 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:35.733 "nsid": 1, 00:27:35.733 "uuid": "091d04fc-d09f-415e-af24-5c85134bd2f5" 00:27:35.733 } 00:27:35.733 ], 00:27:35.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.733 "serial_number": "SPDK00000000000001", 00:27:35.733 "subtype": "NVMe" 00:27:35.733 } 00:27:35.733 ] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.733 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:27:35.733 [2024-07-12 00:46:40.510677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:35.733 [2024-07-12 00:46:40.510785] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96959 ] 00:27:35.996 [2024-07-12 00:46:40.682093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:35.996 [2024-07-12 00:46:40.682247] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:35.996 [2024-07-12 00:46:40.682265] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:35.996 [2024-07-12 00:46:40.682317] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:35.996 [2024-07-12 00:46:40.682335] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:35.996 [2024-07-12 00:46:40.682545] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:35.996 [2024-07-12 00:46:40.682637] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:27:35.996 [2024-07-12 00:46:40.697423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:35.996 [2024-07-12 00:46:40.697465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:35.996 [2024-07-12 00:46:40.697480] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:35.996 [2024-07-12 00:46:40.697489] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:35.996 [2024-07-12 00:46:40.697586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.697603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.697612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.697638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:35.996 [2024-07-12 00:46:40.697683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.705434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.996 [2024-07-12 00:46:40.705471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.705482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.705527] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:35.996 [2024-07-12 00:46:40.705547] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:35.996 [2024-07-12 00:46:40.705559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:35.996 [2024-07-12 00:46:40.705581] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.705618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.996 [2024-07-12 00:46:40.705664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.705801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.996 [2024-07-12 00:46:40.705816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.705826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.705851] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:35.996 [2024-07-12 00:46:40.705867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:35.996 [2024-07-12 00:46:40.705880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.705897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.705920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.996 [2024-07-12 00:46:40.705952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.706057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.996 [2024-07-12 00:46:40.706070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.706077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.706096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:35.996 [2024-07-12 00:46:40.706111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.706162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.996 [2024-07-12 00:46:40.706192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.706287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.996 [2024-07-12 00:46:40.706306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.706314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.706333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.706405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.996 [2024-07-12 00:46:40.706449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.706542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.996 [2024-07-12 00:46:40.706555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.706561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.706579] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:35.996 [2024-07-12 00:46:40.706589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706718] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:35.996 [2024-07-12 00:46:40.706727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.706780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.996 [2024-07-12 00:46:40.706812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.996 [2024-07-12 00:46:40.706904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
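The wall of *DEBUG* lines through here is the initiator-side controller-initialization state machine for the discovery controller: FABRIC CONNECT on the admin queue, property reads of VS and CAP, then the check-EN / disable (wait CSTS.RDY = 0) / write CC.EN = 1 / wait CSTS.RDY = 1 sequence, and finally IDENTIFY. Each step is visible only because identify.sh runs spdk_nvme_identify with -L all, which enables every SPDK debug log flag. To reproduce against this target (same -r connection string as in the trace; without -L all, only the identify report that follows should be printed):

  # verbose, as run by the test: every PDU and state transition is logged
  ./build/bin/spdk_nvme_identify -L all \
          -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # quiet run: just the controller capabilities and discovery log report
  ./build/bin/spdk_nvme_identify \
          -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'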
00:27:35.996 [2024-07-12 00:46:40.706917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.996 [2024-07-12 00:46:40.706924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.996 [2024-07-12 00:46:40.706942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:35.996 [2024-07-12 00:46:40.706960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.996 [2024-07-12 00:46:40.706978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.996 [2024-07-12 00:46:40.706997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.997 [2024-07-12 00:46:40.707035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.997 [2024-07-12 00:46:40.707117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.997 [2024-07-12 00:46:40.707129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.997 [2024-07-12 00:46:40.707136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.997 [2024-07-12 00:46:40.707157] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:35.997 [2024-07-12 00:46:40.707168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.707182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:35.997 [2024-07-12 00:46:40.707200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.707221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.707246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.997 [2024-07-12 00:46:40.707297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.997 [2024-07-12 00:46:40.707470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:35.997 [2024-07-12 00:46:40.707485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:35.997 [2024-07-12 00:46:40.707496] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707505] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:27:35.997 
[2024-07-12 00:46:40.707519] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:35.997 [2024-07-12 00:46:40.707528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707553] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.997 [2024-07-12 00:46:40.707582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.997 [2024-07-12 00:46:40.707588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.997 [2024-07-12 00:46:40.707615] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:35.997 [2024-07-12 00:46:40.707626] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:35.997 [2024-07-12 00:46:40.707638] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:35.997 [2024-07-12 00:46:40.707649] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:35.997 [2024-07-12 00:46:40.707658] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:35.997 [2024-07-12 00:46:40.707667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.707690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.707707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.707740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:35.997 [2024-07-12 00:46:40.707774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.997 [2024-07-12 00:46:40.707876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.997 [2024-07-12 00:46:40.707888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.997 [2024-07-12 00:46:40.707895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:35.997 [2024-07-12 00:46:40.707917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707934] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.707955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.997 [2024-07-12 00:46:40.707971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.707985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.707996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.997 [2024-07-12 00:46:40.708006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.708030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.997 [2024-07-12 00:46:40.708040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.708068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.997 [2024-07-12 00:46:40.708078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.708097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:35.997 [2024-07-12 00:46:40.708110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.708133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.997 [2024-07-12 00:46:40.708180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:35.997 [2024-07-12 00:46:40.708193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:27:35.997 [2024-07-12 00:46:40.708201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:27:35.997 [2024-07-12 00:46:40.708209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:35.997 [2024-07-12 00:46:40.708218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:35.997 [2024-07-12 00:46:40.708360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.997 [2024-07-12 00:46:40.708377] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.997 [2024-07-12 00:46:40.708420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:35.997 [2024-07-12 00:46:40.708446] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:35.997 [2024-07-12 00:46:40.708458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:35.997 [2024-07-12 00:46:40.708483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.708514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.997 [2024-07-12 00:46:40.708547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:35.997 [2024-07-12 00:46:40.708675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:35.997 [2024-07-12 00:46:40.708690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:35.997 [2024-07-12 00:46:40.708702] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708710] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:27:35.997 [2024-07-12 00:46:40.708720] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:35.997 [2024-07-12 00:46:40.708731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708746] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708758] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.997 [2024-07-12 00:46:40.708782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.997 [2024-07-12 00:46:40.708789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:35.997 [2024-07-12 00:46:40.708841] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:35.997 [2024-07-12 00:46:40.708917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:35.997 [2024-07-12 00:46:40.708953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.997 [2024-07-12 00:46:40.708967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.997 [2024-07-12 00:46:40.708975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:35.998 [2024-07-12 00:46:40.708982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:35.998 [2024-07-12 00:46:40.709002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.998 [2024-07-12 00:46:40.709052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:35.998 [2024-07-12 00:46:40.709066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:35.998 [2024-07-12 00:46:40.709390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:35.998 [2024-07-12 00:46:40.713501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:35.998 [2024-07-12 00:46:40.713511] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.713519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:27:35.998 [2024-07-12 00:46:40.713528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:27:35.998 [2024-07-12 00:46:40.713537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.713555] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.713565] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.713575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.998 [2024-07-12 00:46:40.713598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.998 [2024-07-12 00:46:40.713606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.713614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:35.998 [2024-07-12 00:46:40.749507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.998 [2024-07-12 00:46:40.749537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.998 [2024-07-12 00:46:40.749546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:35.998 [2024-07-12 00:46:40.749586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:35.998 [2024-07-12 00:46:40.749613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.998 [2024-07-12 00:46:40.749658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:35.998 [2024-07-12 00:46:40.749834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:35.998 [2024-07-12 00:46:40.749852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:35.998 [2024-07-12 00:46:40.749860] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749868] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:27:35.998 [2024-07-12 00:46:40.749876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:27:35.998 [2024-07-12 00:46:40.749885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749906] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.998 [2024-07-12 00:46:40.749930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.998 [2024-07-12 00:46:40.749937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:35.998 [2024-07-12 00:46:40.749967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.749978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:35.998 [2024-07-12 00:46:40.749993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.998 [2024-07-12 00:46:40.750034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:35.998 [2024-07-12 00:46:40.750166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:35.998 [2024-07-12 00:46:40.750178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:35.998 [2024-07-12 00:46:40.750185] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.750192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:27:35.998 [2024-07-12 00:46:40.750200] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:27:35.998 [2024-07-12 00:46:40.750224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.750241] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.750249] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.793510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:35.998 [2024-07-12 00:46:40.793568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:35.998 [2024-07-12 00:46:40.793578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:35.998 [2024-07-12 00:46:40.793588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:35.998 ===================================================== 00:27:35.998 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:35.998 ===================================================== 00:27:35.998 Controller Capabilities/Features 00:27:35.998 ================================ 00:27:35.998 Vendor ID: 0000 00:27:35.998 Subsystem Vendor ID: 0000 00:27:35.998 Serial Number: .................... 
00:27:35.998 Model Number: ........................................
00:27:35.998 Firmware Version: 24.09
00:27:35.998 Recommended Arb Burst: 0
00:27:35.998 IEEE OUI Identifier: 00 00 00
00:27:35.998 Multi-path I/O
00:27:35.998 May have multiple subsystem ports: No
00:27:35.998 May have multiple controllers: No
00:27:35.998 Associated with SR-IOV VF: No
00:27:35.998 Max Data Transfer Size: 131072
00:27:35.998 Max Number of Namespaces: 0
00:27:35.998 Max Number of I/O Queues: 1024
00:27:35.998 NVMe Specification Version (VS): 1.3
00:27:35.998 NVMe Specification Version (Identify): 1.3
00:27:35.998 Maximum Queue Entries: 128
00:27:35.998 Contiguous Queues Required: Yes
00:27:35.998 Arbitration Mechanisms Supported
00:27:35.998 Weighted Round Robin: Not Supported
00:27:35.998 Vendor Specific: Not Supported
00:27:35.998 Reset Timeout: 15000 ms
00:27:35.998 Doorbell Stride: 4 bytes
00:27:35.998 NVM Subsystem Reset: Not Supported
00:27:35.998 Command Sets Supported
00:27:35.998 NVM Command Set: Supported
00:27:35.998 Boot Partition: Not Supported
00:27:35.998 Memory Page Size Minimum: 4096 bytes
00:27:35.998 Memory Page Size Maximum: 4096 bytes
00:27:35.998 Persistent Memory Region: Not Supported
00:27:35.998 Optional Asynchronous Events Supported
00:27:35.998 Namespace Attribute Notices: Not Supported
00:27:35.998 Firmware Activation Notices: Not Supported
00:27:35.998 ANA Change Notices: Not Supported
00:27:35.998 PLE Aggregate Log Change Notices: Not Supported
00:27:35.998 LBA Status Info Alert Notices: Not Supported
00:27:35.998 EGE Aggregate Log Change Notices: Not Supported
00:27:35.998 Normal NVM Subsystem Shutdown event: Not Supported
00:27:35.998 Zone Descriptor Change Notices: Not Supported
00:27:35.998 Discovery Log Change Notices: Supported
00:27:35.998 Controller Attributes
00:27:35.998 128-bit Host Identifier: Not Supported
00:27:35.998 Non-Operational Permissive Mode: Not Supported
00:27:35.998 NVM Sets: Not Supported
00:27:35.998 Read Recovery Levels: Not Supported
00:27:35.998 Endurance Groups: Not Supported
00:27:35.998 Predictable Latency Mode: Not Supported
00:27:35.998 Traffic Based Keep Alive: Not Supported
00:27:35.998 Namespace Granularity: Not Supported
00:27:35.998 SQ Associations: Not Supported
00:27:35.998 UUID List: Not Supported
00:27:35.998 Multi-Domain Subsystem: Not Supported
00:27:35.998 Fixed Capacity Management: Not Supported
00:27:35.998 Variable Capacity Management: Not Supported
00:27:35.998 Delete Endurance Group: Not Supported
00:27:35.998 Delete NVM Set: Not Supported
00:27:35.998 Extended LBA Formats Supported: Not Supported
00:27:35.998 Flexible Data Placement Supported: Not Supported
00:27:35.998
00:27:35.998 Controller Memory Buffer Support
00:27:35.998 ================================
00:27:35.998 Supported: No
00:27:35.998
00:27:35.998 Persistent Memory Region Support
00:27:35.998 ================================
00:27:35.998 Supported: No
00:27:35.998
00:27:35.998 Admin Command Set Attributes
00:27:35.998 ============================
00:27:35.998 Security Send/Receive: Not Supported
00:27:35.998 Format NVM: Not Supported
00:27:35.998 Firmware Activate/Download: Not Supported
00:27:35.998 Namespace Management: Not Supported
00:27:35.998 Device Self-Test: Not Supported
00:27:35.998 Directives: Not Supported
00:27:35.998 NVMe-MI: Not Supported
00:27:35.998 Virtualization Management: Not Supported
00:27:35.998 Doorbell Buffer Config: Not Supported
00:27:35.998 Get LBA Status Capability: Not Supported
00:27:35.998 Command & Feature Lockdown Capability: Not Supported
00:27:35.998 Abort Command Limit: 1
00:27:35.998 Async Event Request Limit: 4
00:27:35.998 Number of Firmware Slots: N/A
00:27:35.998 Firmware Slot 1 Read-Only: N/A
00:27:35.998 Firmware Activation Without Reset: N/A
00:27:35.998 Multiple Update Detection Support: N/A
00:27:35.998 Firmware Update Granularity: No Information Provided
00:27:35.998 Per-Namespace SMART Log: No
00:27:35.998 Asymmetric Namespace Access Log Page: Not Supported
00:27:35.998 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:35.998 Command Effects Log Page: Not Supported
00:27:35.998 Get Log Page Extended Data: Supported
00:27:35.998 Telemetry Log Pages: Not Supported
00:27:35.998 Persistent Event Log Pages: Not Supported
00:27:35.998 Supported Log Pages Log Page: May Support
00:27:35.998 Commands Supported & Effects Log Page: Not Supported
00:27:35.998 Feature Identifiers & Effects Log Page: May Support
00:27:35.998 NVMe-MI Commands & Effects Log Page: May Support
00:27:35.998 Data Area 4 for Telemetry Log: Not Supported
00:27:35.998 Error Log Page Entries Supported: 128
00:27:35.999 Keep Alive: Not Supported
00:27:35.999
00:27:35.999 NVM Command Set Attributes
00:27:35.999 ==========================
00:27:35.999 Submission Queue Entry Size
00:27:35.999 Max: 1
00:27:35.999 Min: 1
00:27:35.999 Completion Queue Entry Size
00:27:35.999 Max: 1
00:27:35.999 Min: 1
00:27:35.999 Number of Namespaces: 0
00:27:35.999 Compare Command: Not Supported
00:27:35.999 Write Uncorrectable Command: Not Supported
00:27:35.999 Dataset Management Command: Not Supported
00:27:35.999 Write Zeroes Command: Not Supported
00:27:35.999 Set Features Save Field: Not Supported
00:27:35.999 Reservations: Not Supported
00:27:35.999 Timestamp: Not Supported
00:27:35.999 Copy: Not Supported
00:27:35.999 Volatile Write Cache: Not Present
00:27:35.999 Atomic Write Unit (Normal): 1
00:27:35.999 Atomic Write Unit (PFail): 1
00:27:35.999 Atomic Compare & Write Unit: 1
00:27:35.999 Fused Compare & Write: Supported
00:27:35.999 Scatter-Gather List
00:27:35.999 SGL Command Set: Supported
00:27:35.999 SGL Keyed: Supported
00:27:35.999 SGL Bit Bucket Descriptor: Not Supported
00:27:35.999 SGL Metadata Pointer: Not Supported
00:27:35.999 Oversized SGL: Not Supported
00:27:35.999 SGL Metadata Address: Not Supported
00:27:35.999 SGL Offset: Supported
00:27:35.999 Transport SGL Data Block: Not Supported
00:27:35.999 Replay Protected Memory Block: Not Supported
00:27:35.999
00:27:35.999 Firmware Slot Information
00:27:35.999 =========================
00:27:35.999 Active slot: 0
00:27:35.999
00:27:35.999
00:27:35.999 Error Log
00:27:35.999 =========
00:27:35.999
00:27:35.999 Active Namespaces
00:27:35.999 =================
00:27:35.999 Discovery Log Page
00:27:35.999 ==================
00:27:35.999 Generation Counter: 2
00:27:35.999 Number of Records: 2
00:27:35.999 Record Format: 0
00:27:35.999
00:27:35.999 Discovery Log Entry 0
00:27:35.999 ----------------------
00:27:35.999 Transport Type: 3 (TCP)
00:27:35.999 Address Family: 1 (IPv4)
00:27:35.999 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:35.999 Entry Flags:
00:27:35.999 Duplicate Returned Information: 1
00:27:35.999 Explicit Persistent Connection Support for Discovery: 1
00:27:35.999 Transport Requirements:
00:27:35.999 Secure Channel: Not Required
00:27:35.999 Port ID: 0 (0x0000)
00:27:35.999 Controller ID: 65535 (0xffff)
00:27:35.999 Admin Max SQ Size: 128
00:27:35.999 Transport Service Identifier: 4420
00:27:35.999 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:35.999 Transport Address: 10.0.0.2
00:27:35.999 Discovery Log Entry 1
00:27:35.999 ----------------------
00:27:35.999 Transport Type: 3 (TCP)
00:27:35.999 Address Family: 1 (IPv4)
00:27:35.999 Subsystem Type: 2 (NVM Subsystem)
00:27:35.999 Entry Flags:
00:27:35.999 Duplicate Returned Information: 0
00:27:35.999 Explicit Persistent Connection Support for Discovery: 0
00:27:35.999 Transport Requirements:
00:27:35.999 Secure Channel: Not Required
00:27:35.999 Port ID: 0 (0x0000)
00:27:35.999 Controller ID: 65535 (0xffff)
00:27:35.999 Admin Max SQ Size: 128
00:27:35.999 Transport Service Identifier: 4420
00:27:35.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:35.999 Transport Address: 10.0.0.2
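The dump above is the identify pass against the discovery subsystem: Transport Type 3 is NVMe/TCP, Subsystem Type 3 marks the discovery subsystem itself, Subsystem Type 2 the data-bearing NVM subsystem (nqn.2016-06.io.spdk:cnode1), and 4420 is the IANA-assigned NVMe/TCP port. A minimal sketch of issuing the same query through SPDK's public host API, assuming roughly this SPDK vintage: the program name is illustrative, error handling is trimmed, and passing NULL controller opts to spdk_nvme_connect is assumed to select defaults.

/* discovery_sketch.c -- hedged, minimal sketch (not part of this log):
 * connect to the discovery service that produced the dump above and
 * print a few identify-controller fields. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";     /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0)
		return 1;

	/* Same transport parameters the test used, aimed at the
	 * well-known discovery NQN instead of cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0)
		return 1;

	/* Synchronous connect; this drives the FABRIC CONNECT and
	 * PROPERTY GET/SET handshake the DEBUG traces show. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);
	printf("Subsystem NQN:    %s\n", (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);  /* triggers the shutdown sequence traced below */
	return 0;
}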
[2024-07-12 00:46:40.793843] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:35.999 [2024-07-12 00:46:40.793872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080
00:27:35.999 [2024-07-12 00:46:40.793894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:35.999 [2024-07-12 00:46:40.793907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080
00:27:35.999 [2024-07-12 00:46:40.793917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:35.999 [2024-07-12 00:46:40.793927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080
00:27:35.999 [2024-07-12 00:46:40.793937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:35.999 [2024-07-12 00:46:40.793945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:27:35.999 [2024-07-12 00:46:40.793955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:35.999 [2024-07-12 00:46:40.794011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.999 [2024-07-12 00:46:40.794268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.999 [2024-07-12 00:46:40.794486] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:27:35.999 [2024-07-12 00:46:40.794503] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:27:35.999 [2024-07-12 00:46:40.794565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... roughly twenty further near-identical CSTS poll iterations elided: each one repeats the same nvme_tcp pdu type = 5 / capsule response / complete tcp_req(0x62600001b580) / FABRIC PROPERTY GET cycle on tqpair 0x61500000f080 while the host waits for shutdown to finish. A sketch of this handshake follows; the log then resumes with the final polls. ...]
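What the elided iterations are doing: the PROPERTY SET above wrote CC.SHN = 01b (normal shutdown), and because the controller reported RTD3E = 0 the host fell back to the 10000 ms budget and kept reading CSTS over the fabric until CSTS.SHST read 10b (shutdown complete). A self-contained toy sketch of that loop; the fake_ctrlr property space is a stand-in for the FABRIC PROPERTY GET/SET capsules, while the register offsets (CC = 0x14, CSTS = 0x1c) and bit fields come from the NVMe specification.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the transport so the sketch runs by itself. */
struct fake_ctrlr { uint32_t cc; uint32_t csts; int polls_left; };

static uint32_t prop_get(struct fake_ctrlr *c, uint32_t ofs)
{
	if (ofs == 0x1c) {                      /* CSTS */
		if (c->polls_left-- <= 0)
			c->csts |= 2u << 2;     /* SHST = 10b: shutdown complete */
		return c->csts;
	}
	return c->cc;                           /* CC (offset 0x14) */
}

static void prop_set(struct fake_ctrlr *c, uint32_t ofs, uint32_t val)
{
	if (ofs == 0x14)
		c->cc = val;
}

int main(void)
{
	struct fake_ctrlr c = { .cc = 0, .csts = 0, .polls_left = 20 };
	uint32_t cc, csts;
	int polls = 0;

	/* PROPERTY GET of CC, then PROPERTY SET with CC.SHN = 01b
	 * (normal shutdown): the GET/SET pair at the top of the trace. */
	cc = prop_get(&c, 0x14);
	prop_set(&c, 0x14, (cc & ~(3u << 14)) | (1u << 14));

	/* RTD3E = 0, so poll CSTS.SHST until it reads 10b; real code
	 * also enforces the 10000 ms budget the trace mentions. */
	do {
		csts = prop_get(&c, 0x1c);
		polls++;
	} while ((csts & (3u << 2)) != (2u << 2));

	printf("shutdown complete after %d polls\n", polls);
	return 0;
}

The toy controller completes after about as many polls as were elided above; in the real trace the whole handshake finished in 11 milliseconds, as nvme_ctrlr_shutdown_poll_async reports below.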
00:27:36.002 [2024-07-12 00:46:40.801178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.002 [2024-07-12 00:46:40.801207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.002 [2024-07-12 00:46:40.801291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.002 [2024-07-12 00:46:40.801304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.002 [2024-07-12 00:46:40.801311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.801318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.002 [2024-07-12 00:46:40.801337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.801346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.801353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.002 [2024-07-12 00:46:40.801366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.002 [2024-07-12 00:46:40.805412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.002 [2024-07-12 00:46:40.805450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.002 [2024-07-12 00:46:40.805464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.002 [2024-07-12 00:46:40.805471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.805478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.002 [2024-07-12 00:46:40.805502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.805512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.805519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.002 [2024-07-12 00:46:40.805534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.002 [2024-07-12 00:46:40.805570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.002 [2024-07-12 00:46:40.805667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.002 [2024-07-12 00:46:40.805685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.002 [2024-07-12 00:46:40.805694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.002 [2024-07-12 00:46:40.805701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.003 [2024-07-12 00:46:40.805717] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:27:36.003 00:27:36.003 00:46:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:36.003 [2024-07-12 00:46:40.913312] Starting SPDK v24.09-pre git sha1 
719d03c6a / DPDK 24.03.0 initialization... 00:27:36.003 [2024-07-12 00:46:40.913451] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96962 ] 00:27:36.265 [2024-07-12 00:46:41.090094] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:36.265 [2024-07-12 00:46:41.090282] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:36.265 [2024-07-12 00:46:41.090305] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:36.265 [2024-07-12 00:46:41.090343] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:36.265 [2024-07-12 00:46:41.090364] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:36.265 [2024-07-12 00:46:41.090611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:36.265 [2024-07-12 00:46:41.090699] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:27:36.265 [2024-07-12 00:46:41.097422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:36.265 [2024-07-12 00:46:41.097511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:36.265 [2024-07-12 00:46:41.097529] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:36.265 [2024-07-12 00:46:41.097542] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:36.265 [2024-07-12 00:46:41.097675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.097699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.097711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.265 [2024-07-12 00:46:41.097746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:36.265 [2024-07-12 00:46:41.097815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.265 [2024-07-12 00:46:41.105444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.265 [2024-07-12 00:46:41.105494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.265 [2024-07-12 00:46:41.105518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.265 [2024-07-12 00:46:41.105560] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:36.265 [2024-07-12 00:46:41.105588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:36.265 [2024-07-12 00:46:41.105603] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:36.265 [2024-07-12 00:46:41.105628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.265 [2024-07-12 00:46:41.105676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.265 [2024-07-12 00:46:41.105725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.265 [2024-07-12 00:46:41.105832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.265 [2024-07-12 00:46:41.105850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.265 [2024-07-12 00:46:41.105859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.265 [2024-07-12 00:46:41.105899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:36.265 [2024-07-12 00:46:41.105931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:36.265 [2024-07-12 00:46:41.105948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.105969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.265 [2024-07-12 00:46:41.105997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.265 [2024-07-12 00:46:41.106041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.265 [2024-07-12 00:46:41.106135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.265 [2024-07-12 00:46:41.106150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.265 [2024-07-12 00:46:41.106158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.106167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.265 [2024-07-12 00:46:41.106181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:36.265 [2024-07-12 00:46:41.106200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:36.265 [2024-07-12 00:46:41.106222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.106232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.106242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.265 [2024-07-12 00:46:41.106260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.265 [2024-07-12 00:46:41.106302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.265 [2024-07-12 00:46:41.106372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.265 [2024-07-12 
00:46:41.106386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.265 [2024-07-12 00:46:41.106413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.106423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.265 [2024-07-12 00:46:41.106437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:36.265 [2024-07-12 00:46:41.106460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.265 [2024-07-12 00:46:41.106476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.106487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.106505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.266 [2024-07-12 00:46:41.106548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.106637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.106651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.106660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.106672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.106685] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:36.266 [2024-07-12 00:46:41.106698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:36.266 [2024-07-12 00:46:41.106719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:36.266 [2024-07-12 00:46:41.106838] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:36.266 [2024-07-12 00:46:41.106850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:36.266 [2024-07-12 00:46:41.106870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.106881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.106891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.106908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.266 [2024-07-12 00:46:41.106952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.107040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.107055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.107062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 
[2024-07-12 00:46:41.107081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.107094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:36.266 [2024-07-12 00:46:41.107116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.107161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.266 [2024-07-12 00:46:41.107199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.107278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.107292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.107300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.107321] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:36.266 [2024-07-12 00:46:41.107337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.107354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:36.266 [2024-07-12 00:46:41.107378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.107422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.107455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.266 [2024-07-12 00:46:41.107516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.107682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.266 [2024-07-12 00:46:41.107698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.266 [2024-07-12 00:46:41.107706] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:27:36.266 [2024-07-12 00:46:41.107728] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:36.266 [2024-07-12 00:46:41.107739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 
[2024-07-12 00:46:41.107762] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107773] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.107808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.107815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.107848] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:36.266 [2024-07-12 00:46:41.107867] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:36.266 [2024-07-12 00:46:41.107878] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:36.266 [2024-07-12 00:46:41.107888] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:36.266 [2024-07-12 00:46:41.107899] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:36.266 [2024-07-12 00:46:41.107911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.107935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.107960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.107982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:36.266 [2024-07-12 00:46:41.108058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.108145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.108163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.108172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.108198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.266 [2024-07-12 00:46:41.108256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
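The records above trace the host-side admin init state machine over NVMe/TCP: Fabrics PROPERTY GET/SET to read CAP and CC, CC.EN raised to 1, a wait for CSTS.RDY = 1, IDENTIFY CONTROLLER, then AER configuration. Through SPDK's public API that whole sequence is driven by a single connect call; a minimal sketch under that assumption (the function name and app name are illustrative, not taken from this test) could look like:

#include "spdk/env.h"
#include "spdk/nvme.h"

/* Sketch: spdk_nvme_connect() runs the init state machine logged above
 * (enable CC.EN, wait for CSTS.RDY = 1, IDENTIFY, configure AER). */
static struct spdk_nvme_ctrlr *connect_cnode1(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};

    spdk_env_opts_init(&env_opts);
    env_opts.name = "nvme_tcp_example";    /* illustrative app name */
    if (spdk_env_init(&env_opts) < 0) {
        return NULL;
    }

    /* Transport, address, and subsystem NQN as reported later in this log. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }

    /* Blocks until the controller reaches the "ready" state seen below. */
    return spdk_nvme_connect(&trid, NULL, 0);
}
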
00:27:36.266 [2024-07-12 00:46:41.108265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.266 [2024-07-12 00:46:41.108299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.266 [2024-07-12 00:46:41.108347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.266 [2024-07-12 00:46:41.108426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.108450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.108467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.266 [2024-07-12 00:46:41.108494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.266 [2024-07-12 00:46:41.108546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:36.266 [2024-07-12 00:46:41.108562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:27:36.266 [2024-07-12 00:46:41.108572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:27:36.266 [2024-07-12 00:46:41.108586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.266 [2024-07-12 00:46:41.108597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.266 [2024-07-12 00:46:41.108737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.266 [2024-07-12 00:46:41.108753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.266 [2024-07-12 00:46:41.108761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.266 [2024-07-12 00:46:41.108770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.266 [2024-07-12 00:46:41.108783] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:36.266 [2024-07-12 00:46:41.108796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.108813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.108827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:36.266 [2024-07-12 00:46:41.108842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.108852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.108862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.108880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:36.267 [2024-07-12 00:46:41.108918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.267 [2024-07-12 00:46:41.109008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.109026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.109034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.109043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.109163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.109196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.109232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.109243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.109262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.109304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.267 [2024-07-12 00:46:41.113443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.267 [2024-07-12 00:46:41.113477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.267 [2024-07-12 00:46:41.113506] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:27:36.267 [2024-07-12 00:46:41.113527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:36.267 [2024-07-12 00:46:41.113537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113554] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113563] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.113592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.113601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.113669] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:36.267 [2024-07-12 00:46:41.113699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.113741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.113765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.113775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.113800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.113850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.267 [2024-07-12 00:46:41.113985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.267 [2024-07-12 00:46:41.114000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.267 [2024-07-12 00:46:41.114008] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114017] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:27:36.267 [2024-07-12 00:46:41.114027] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:36.267 [2024-07-12 00:46:41.114037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114051] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114060] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.114096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.114104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.114166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.114249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.114290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.267 [2024-07-12 00:46:41.114426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.267 [2024-07-12 00:46:41.114444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.267 [2024-07-12 00:46:41.114452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114461] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:27:36.267 [2024-07-12 00:46:41.114472] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:36.267 [2024-07-12 00:46:41.114481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114502] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114511] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.114539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.114547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.114596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:36.267 [2024-07-12 00:46:41.114706] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:36.267 [2024-07-12 00:46:41.114717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
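At this point the log has reported "Namespace 1 was added" via spdk_nvme_ctrlr_get_ns() and is walking the per-namespace IDENTIFY and namespace-ID-descriptor steps. Once the controller is ready, the active namespace list can be iterated through the same public API; a small sketch (the helper name and printf formatting are illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: enumerate the active namespaces discovered during the
 * "identify active ns" step logged above. */
static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        /* Same accessor the log names: spdk_nvme_ctrlr_get_ns(). */
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("nsid %u: %llu sectors of %u bytes\n", nsid,
               (unsigned long long)spdk_nvme_ns_get_num_sectors(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
}
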
00:27:36.267 [2024-07-12 00:46:41.114729] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:36.267 [2024-07-12 00:46:41.114780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.114819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.114837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.114857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.114872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.267 [2024-07-12 00:46:41.114923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.267 [2024-07-12 00:46:41.114944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:36.267 [2024-07-12 00:46:41.115043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.115060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.115069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.115098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.115111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.115119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.115148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.115176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.115211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:36.267 [2024-07-12 00:46:41.115294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.115309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.115317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.115347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115357] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:36.267 [2024-07-12 00:46:41.115374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.267 [2024-07-12 00:46:41.115427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:36.267 [2024-07-12 00:46:41.115509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.267 [2024-07-12 00:46:41.115525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.267 [2024-07-12 00:46:41.115533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.267 [2024-07-12 00:46:41.115542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:36.267 [2024-07-12 00:46:41.115563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:36.268 [2024-07-12 00:46:41.115602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.268 [2024-07-12 00:46:41.115638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:36.268 [2024-07-12 00:46:41.115719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.268 [2024-07-12 00:46:41.115741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.268 [2024-07-12 00:46:41.115750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:36.268 [2024-07-12 00:46:41.115798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:27:36.268 [2024-07-12 00:46:41.115830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.268 [2024-07-12 00:46:41.115848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:27:36.268 [2024-07-12 00:46:41.115874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.268 [2024-07-12 00:46:41.115892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:27:36.268 [2024-07-12 00:46:41.115927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.268 [2024-07-12 00:46:41.115952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.115963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0x61500000f080) 00:27:36.268 [2024-07-12 00:46:41.115979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.268 [2024-07-12 00:46:41.116019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:27:36.268 [2024-07-12 00:46:41.116034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:27:36.268 [2024-07-12 00:46:41.116045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:27:36.268 [2024-07-12 00:46:41.116055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:27:36.268 [2024-07-12 00:46:41.116256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.268 [2024-07-12 00:46:41.116285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.268 [2024-07-12 00:46:41.116296] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116306] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:27:36.268 [2024-07-12 00:46:41.116317] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:27:36.268 [2024-07-12 00:46:41.116335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116375] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116422] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.268 [2024-07-12 00:46:41.116467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.268 [2024-07-12 00:46:41.116476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116484] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:27:36.268 [2024-07-12 00:46:41.116494] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:27:36.268 [2024-07-12 00:46:41.116503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116517] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116529] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.268 [2024-07-12 00:46:41.116552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.268 [2024-07-12 00:46:41.116559] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116567] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:27:36.268 [2024-07-12 00:46:41.116577] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:27:36.268 [2024-07-12 00:46:41.116586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
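The controller summary banner printed shortly below is rendered from the cached IDENTIFY CONTROLLER data gathered during the steps above. Fetching the same struct through the public API is one call; a sketch (the field selection mirrors the banner, the helper name is illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: the banner fields below (Vendor ID 8086, Serial Number,
 * Model Number, Max Data Transfer Size) come from this struct. */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    printf("Vendor ID: %04x\n", cdata->vid);
    printf("Serial Number: %.20s\n", cdata->sn);
    printf("Model Number: %.40s\n", cdata->mn);
    /* "Max Data Transfer Size: 131072" below is MDTS scaled by the minimum
     * memory page size; cdata->mdts holds the raw power-of-two exponent. */
    printf("MDTS (raw): %u\n", cdata->mdts);
}
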
00:27:36.268 [2024-07-12 00:46:41.116602] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:36.268 [2024-07-12 00:46:41.116636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:36.268 [2024-07-12 00:46:41.116644] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116652] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:27:36.268 [2024-07-12 00:46:41.116662] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:27:36.268 [2024-07-12 00:46:41.116672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116685] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116693] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.268 [2024-07-12 00:46:41.116716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.268 [2024-07-12 00:46:41.116723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116733] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:27:36.268 [2024-07-12 00:46:41.116777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.268 [2024-07-12 00:46:41.116791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.268 [2024-07-12 00:46:41.116799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:27:36.268 [2024-07-12 00:46:41.116833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.268 [2024-07-12 00:46:41.116846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.268 [2024-07-12 00:46:41.116853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:27:36.268 [2024-07-12 00:46:41.116878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.268 [2024-07-12 00:46:41.116890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.268 [2024-07-12 00:46:41.116897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.268 [2024-07-12 00:46:41.116922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:27:36.268 ===================================================== 00:27:36.268 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.268 ===================================================== 00:27:36.268 Controller Capabilities/Features 00:27:36.268 ================================ 00:27:36.268 Vendor ID: 8086 00:27:36.268 Subsystem Vendor ID: 8086 00:27:36.268 Serial Number: SPDK00000000000001 00:27:36.268 Model Number: SPDK bdev Controller 
00:27:36.268 Firmware Version: 24.09 00:27:36.268 Recommended Arb Burst: 6 00:27:36.268 IEEE OUI Identifier: e4 d2 5c 00:27:36.268 Multi-path I/O 00:27:36.268 May have multiple subsystem ports: Yes 00:27:36.268 May have multiple controllers: Yes 00:27:36.268 Associated with SR-IOV VF: No 00:27:36.268 Max Data Transfer Size: 131072 00:27:36.268 Max Number of Namespaces: 32 00:27:36.268 Max Number of I/O Queues: 127 00:27:36.268 NVMe Specification Version (VS): 1.3 00:27:36.268 NVMe Specification Version (Identify): 1.3 00:27:36.268 Maximum Queue Entries: 128 00:27:36.268 Contiguous Queues Required: Yes 00:27:36.268 Arbitration Mechanisms Supported 00:27:36.268 Weighted Round Robin: Not Supported 00:27:36.268 Vendor Specific: Not Supported 00:27:36.268 Reset Timeout: 15000 ms 00:27:36.268 Doorbell Stride: 4 bytes 00:27:36.268 NVM Subsystem Reset: Not Supported 00:27:36.268 Command Sets Supported 00:27:36.268 NVM Command Set: Supported 00:27:36.268 Boot Partition: Not Supported 00:27:36.268 Memory Page Size Minimum: 4096 bytes 00:27:36.268 Memory Page Size Maximum: 4096 bytes 00:27:36.268 Persistent Memory Region: Not Supported 00:27:36.268 Optional Asynchronous Events Supported 00:27:36.268 Namespace Attribute Notices: Supported 00:27:36.268 Firmware Activation Notices: Not Supported 00:27:36.268 ANA Change Notices: Not Supported 00:27:36.268 PLE Aggregate Log Change Notices: Not Supported 00:27:36.268 LBA Status Info Alert Notices: Not Supported 00:27:36.268 EGE Aggregate Log Change Notices: Not Supported 00:27:36.268 Normal NVM Subsystem Shutdown event: Not Supported 00:27:36.268 Zone Descriptor Change Notices: Not Supported 00:27:36.268 Discovery Log Change Notices: Not Supported 00:27:36.268 Controller Attributes 00:27:36.268 128-bit Host Identifier: Supported 00:27:36.268 Non-Operational Permissive Mode: Not Supported 00:27:36.268 NVM Sets: Not Supported 00:27:36.268 Read Recovery Levels: Not Supported 00:27:36.268 Endurance Groups: Not Supported 00:27:36.268 Predictable Latency Mode: Not Supported 00:27:36.268 Traffic Based Keep ALive: Not Supported 00:27:36.268 Namespace Granularity: Not Supported 00:27:36.268 SQ Associations: Not Supported 00:27:36.268 UUID List: Not Supported 00:27:36.268 Multi-Domain Subsystem: Not Supported 00:27:36.268 Fixed Capacity Management: Not Supported 00:27:36.268 Variable Capacity Management: Not Supported 00:27:36.268 Delete Endurance Group: Not Supported 00:27:36.268 Delete NVM Set: Not Supported 00:27:36.268 Extended LBA Formats Supported: Not Supported 00:27:36.268 Flexible Data Placement Supported: Not Supported 00:27:36.268 00:27:36.268 Controller Memory Buffer Support 00:27:36.268 ================================ 00:27:36.268 Supported: No 00:27:36.268 00:27:36.268 Persistent Memory Region Support 00:27:36.268 ================================ 00:27:36.268 Supported: No 00:27:36.268 00:27:36.269 Admin Command Set Attributes 00:27:36.269 ============================ 00:27:36.269 Security Send/Receive: Not Supported 00:27:36.269 Format NVM: Not Supported 00:27:36.269 Firmware Activate/Download: Not Supported 00:27:36.269 Namespace Management: Not Supported 00:27:36.269 Device Self-Test: Not Supported 00:27:36.269 Directives: Not Supported 00:27:36.269 NVMe-MI: Not Supported 00:27:36.269 Virtualization Management: Not Supported 00:27:36.269 Doorbell Buffer Config: Not Supported 00:27:36.269 Get LBA Status Capability: Not Supported 00:27:36.269 Command & Feature Lockdown Capability: Not Supported 00:27:36.269 Abort Command Limit: 4 00:27:36.269 Async 
Event Request Limit: 4 00:27:36.269 Number of Firmware Slots: N/A 00:27:36.269 Firmware Slot 1 Read-Only: N/A 00:27:36.269 Firmware Activation Without Reset: N/A 00:27:36.269 Multiple Update Detection Support: N/A 00:27:36.269 Firmware Update Granularity: No Information Provided 00:27:36.269 Per-Namespace SMART Log: No 00:27:36.269 Asymmetric Namespace Access Log Page: Not Supported 00:27:36.269 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:36.269 Command Effects Log Page: Supported 00:27:36.269 Get Log Page Extended Data: Supported 00:27:36.269 Telemetry Log Pages: Not Supported 00:27:36.269 Persistent Event Log Pages: Not Supported 00:27:36.269 Supported Log Pages Log Page: May Support 00:27:36.269 Commands Supported & Effects Log Page: Not Supported 00:27:36.269 Feature Identifiers & Effects Log Page:May Support 00:27:36.269 NVMe-MI Commands & Effects Log Page: May Support 00:27:36.269 Data Area 4 for Telemetry Log: Not Supported 00:27:36.269 Error Log Page Entries Supported: 128 00:27:36.269 Keep Alive: Supported 00:27:36.269 Keep Alive Granularity: 10000 ms 00:27:36.269 00:27:36.269 NVM Command Set Attributes 00:27:36.269 ========================== 00:27:36.269 Submission Queue Entry Size 00:27:36.269 Max: 64 00:27:36.269 Min: 64 00:27:36.269 Completion Queue Entry Size 00:27:36.269 Max: 16 00:27:36.269 Min: 16 00:27:36.269 Number of Namespaces: 32 00:27:36.269 Compare Command: Supported 00:27:36.269 Write Uncorrectable Command: Not Supported 00:27:36.269 Dataset Management Command: Supported 00:27:36.269 Write Zeroes Command: Supported 00:27:36.269 Set Features Save Field: Not Supported 00:27:36.269 Reservations: Supported 00:27:36.269 Timestamp: Not Supported 00:27:36.269 Copy: Supported 00:27:36.269 Volatile Write Cache: Present 00:27:36.269 Atomic Write Unit (Normal): 1 00:27:36.269 Atomic Write Unit (PFail): 1 00:27:36.269 Atomic Compare & Write Unit: 1 00:27:36.269 Fused Compare & Write: Supported 00:27:36.269 Scatter-Gather List 00:27:36.269 SGL Command Set: Supported 00:27:36.269 SGL Keyed: Supported 00:27:36.269 SGL Bit Bucket Descriptor: Not Supported 00:27:36.269 SGL Metadata Pointer: Not Supported 00:27:36.269 Oversized SGL: Not Supported 00:27:36.269 SGL Metadata Address: Not Supported 00:27:36.269 SGL Offset: Supported 00:27:36.269 Transport SGL Data Block: Not Supported 00:27:36.269 Replay Protected Memory Block: Not Supported 00:27:36.269 00:27:36.269 Firmware Slot Information 00:27:36.269 ========================= 00:27:36.269 Active slot: 1 00:27:36.269 Slot 1 Firmware Revision: 24.09 00:27:36.269 00:27:36.269 00:27:36.269 Commands Supported and Effects 00:27:36.269 ============================== 00:27:36.269 Admin Commands 00:27:36.269 -------------- 00:27:36.269 Get Log Page (02h): Supported 00:27:36.269 Identify (06h): Supported 00:27:36.269 Abort (08h): Supported 00:27:36.269 Set Features (09h): Supported 00:27:36.269 Get Features (0Ah): Supported 00:27:36.269 Asynchronous Event Request (0Ch): Supported 00:27:36.269 Keep Alive (18h): Supported 00:27:36.269 I/O Commands 00:27:36.269 ------------ 00:27:36.269 Flush (00h): Supported LBA-Change 00:27:36.269 Write (01h): Supported LBA-Change 00:27:36.269 Read (02h): Supported 00:27:36.269 Compare (05h): Supported 00:27:36.269 Write Zeroes (08h): Supported LBA-Change 00:27:36.269 Dataset Management (09h): Supported LBA-Change 00:27:36.269 Copy (19h): Supported LBA-Change 00:27:36.269 00:27:36.269 Error Log 00:27:36.269 ========= 00:27:36.269 00:27:36.269 Arbitration 00:27:36.269 =========== 00:27:36.269 Arbitration 
Burst: 1 00:27:36.269 00:27:36.269 Power Management 00:27:36.269 ================ 00:27:36.269 Number of Power States: 1 00:27:36.269 Current Power State: Power State #0 00:27:36.269 Power State #0: 00:27:36.269 Max Power: 0.00 W 00:27:36.269 Non-Operational State: Operational 00:27:36.269 Entry Latency: Not Reported 00:27:36.269 Exit Latency: Not Reported 00:27:36.269 Relative Read Throughput: 0 00:27:36.269 Relative Read Latency: 0 00:27:36.269 Relative Write Throughput: 0 00:27:36.269 Relative Write Latency: 0 00:27:36.269 Idle Power: Not Reported 00:27:36.269 Active Power: Not Reported 00:27:36.269 Non-Operational Permissive Mode: Not Supported 00:27:36.269 00:27:36.269 Health Information 00:27:36.269 ================== 00:27:36.269 Critical Warnings: 00:27:36.269 Available Spare Space: OK 00:27:36.269 Temperature: OK 00:27:36.269 Device Reliability: OK 00:27:36.269 Read Only: No 00:27:36.269 Volatile Memory Backup: OK 00:27:36.269 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:36.269 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:36.269 Available Spare: 0% 00:27:36.269 Available Spare Threshold: 0% 00:27:36.269 Life Percentage Used:[2024-07-12 00:46:41.117170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.269 [2024-07-12 00:46:41.117186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:27:36.269 [2024-07-12 00:46:41.117206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.269 [2024-07-12 00:46:41.117254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:27:36.269 [2024-07-12 00:46:41.117356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.269 [2024-07-12 00:46:41.117383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.269 [2024-07-12 00:46:41.121458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.269 [2024-07-12 00:46:41.121472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:27:36.269 [2024-07-12 00:46:41.121603] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:36.269 [2024-07-12 00:46:41.121640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:27:36.269 [2024-07-12 00:46:41.121660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.269 [2024-07-12 00:46:41.121674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:27:36.269 [2024-07-12 00:46:41.121686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.269 [2024-07-12 00:46:41.121696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:27:36.269 [2024-07-12 00:46:41.121724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.269 [2024-07-12 00:46:41.121739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.269 [2024-07-12 00:46:41.121751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.269 [2024-07-12 00:46:41.121771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.269 [2024-07-12 00:46:41.121782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.269 [2024-07-12 00:46:41.121792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.269 [2024-07-12 00:46:41.121811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.269 [2024-07-12 00:46:41.121861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.269 [2024-07-12 00:46:41.121948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.269 [2024-07-12 00:46:41.121979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.269 [2024-07-12 00:46:41.121990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [2024-07-12 00:46:41.122020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.270 [2024-07-12 00:46:41.122060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.270 [2024-07-12 00:46:41.122106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.270 [2024-07-12 00:46:41.122213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.270 [2024-07-12 00:46:41.122238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.270 [2024-07-12 00:46:41.122248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [2024-07-12 00:46:41.122269] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:36.270 [2024-07-12 00:46:41.122285] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:36.270 [2024-07-12 00:46:41.122308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.270 [2024-07-12 00:46:41.122353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.270 [2024-07-12 00:46:41.122409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.270 [2024-07-12 00:46:41.122489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.270 [2024-07-12 00:46:41.122504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.270 [2024-07-12 
00:46:41.122512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [2024-07-12 00:46:41.122545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.270 [2024-07-12 00:46:41.122586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.270 [2024-07-12 00:46:41.122622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.270 [2024-07-12 00:46:41.122699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.270 [2024-07-12 00:46:41.122717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.270 [2024-07-12 00:46:41.122725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [2024-07-12 00:46:41.122763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.270 [2024-07-12 00:46:41.122799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.270 [2024-07-12 00:46:41.122835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.270 [2024-07-12 00:46:41.122909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.270 [2024-07-12 00:46:41.122935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.270 [2024-07-12 00:46:41.122944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [2024-07-12 00:46:41.122975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.122994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.270 [2024-07-12 00:46:41.123015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.270 [2024-07-12 00:46:41.123051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.270 [2024-07-12 00:46:41.123128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.270 [2024-07-12 00:46:41.123152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.270 [2024-07-12 00:46:41.123161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.270 [2024-07-12 00:46:41.123170] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.270 [... shutdown-poll debug cycle condensed: nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send (capsule_cmd cid=3 on tqpair 0x61500000f080) -> FABRIC PROPERTY GET qid:0 cid:3 -> pdu type = 5 -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete; the identical iteration repeats from 00:46:41.123198 through 00:46:41.125297 with only the timestamps advancing; the final iteration follows ...] 00:27:36.271 [2024-07-12 00:46:41.125297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.271 [2024-07-12 00:46:41.125313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.271 [2024-07-12 00:46:41.125348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.271 [2024-07-12 00:46:41.129431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.271 [2024-07-12 00:46:41.129469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.271 [2024-07-12 00:46:41.129479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.271 [2024-07-12 00:46:41.129489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.271 [2024-07-12 00:46:41.129516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:36.271 [2024-07-12 00:46:41.129527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:36.271 [2024-07-12 00:46:41.129536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:27:36.271 [2024-07-12 00:46:41.129554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.271 [2024-07-12 00:46:41.129597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:27:36.271 [2024-07-12 00:46:41.129688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:36.271 [2024-07-12 00:46:41.129703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:36.271 [2024-07-12 00:46:41.129711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:36.271 [2024-07-12 00:46:41.129720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:27:36.271 [2024-07-12 00:46:41.129737] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:36.271 0% 00:27:36.271 Data Units Read: 0 00:27:36.271 Data Units Written: 0 00:27:36.271 Host Read Commands: 0 00:27:36.271 Host Write Commands: 0 00:27:36.271 Controller Busy Time: 0 minutes 00:27:36.271 Power Cycles: 0 00:27:36.271 Power On Hours: 0 hours 00:27:36.271 Unsafe Shutdowns: 0 00:27:36.271 Unrecoverable Media Errors: 0 00:27:36.271 Lifetime Error Log Entries: 0 00:27:36.271 Warning Temperature Time: 0 minutes 00:27:36.271 Critical Temperature Time: 0 minutes 00:27:36.271 00:27:36.271 Number of Queues 00:27:36.271 ================ 00:27:36.271 Number of I/O Submission Queues: 127 00:27:36.271 Number of I/O Completion Queues: 127 00:27:36.271 00:27:36.271 Active Namespaces 00:27:36.271 ================= 00:27:36.271 Namespace ID:1 00:27:36.271 Error Recovery Timeout: Unlimited 00:27:36.271 Command Set Identifier: NVM (00h) 00:27:36.271 Deallocate: Supported 00:27:36.271 Deallocated/Unwritten Error: Not Supported 00:27:36.271 Deallocated Read Value: Unknown 00:27:36.271 Deallocate in Write Zeroes: Not Supported 00:27:36.271 Deallocated Guard Field: 0xFFFF 00:27:36.271 Flush: Supported 00:27:36.271 Reservation: Supported 00:27:36.271 Namespace Sharing Capabilities: Multiple Controllers 00:27:36.271 Size (in LBAs): 131072 (0GiB) 00:27:36.271 Capacity (in LBAs): 131072 (0GiB) 00:27:36.271 Utilization (in LBAs): 131072 (0GiB) 00:27:36.271 NGUID: ABCDEF0123456789ABCDEF0123456789 
00:27:36.271 EUI64: ABCDEF0123456789 00:27:36.271 UUID: 091d04fc-d09f-415e-af24-5c85134bd2f5 00:27:36.271 Thin Provisioning: Not Supported 00:27:36.271 Per-NS Atomic Units: Yes 00:27:36.271 Atomic Boundary Size (Normal): 0 00:27:36.271 Atomic Boundary Size (PFail): 0 00:27:36.271 Atomic Boundary Offset: 0 00:27:36.271 Maximum Single Source Range Length: 65535 00:27:36.271 Maximum Copy Length: 65535 00:27:36.271 Maximum Source Range Count: 1 00:27:36.271 NGUID/EUI64 Never Reused: No 00:27:36.271 Namespace Write Protected: No 00:27:36.271 Number of LBA Formats: 1 00:27:36.271 Current LBA Format: LBA Format #00 00:27:36.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:36.271 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.529 rmmod nvme_tcp 00:27:36.529 rmmod nvme_fabrics 00:27:36.529 rmmod nvme_keyring 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 96900 ']' 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 96900 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 96900 ']' 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 96900 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96900 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:36.529 killing process with pid 96900 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96900' 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 96900 00:27:36.529 00:46:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 96900 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.904 00:46:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:37.904 00:27:37.904 real 0m4.018s 00:27:37.904 user 0m11.007s 00:27:37.904 sys 0m0.953s 00:27:37.905 00:46:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.905 00:46:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 ************************************ 00:27:37.905 END TEST nvmf_identify 00:27:37.905 ************************************ 00:27:37.905 00:46:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:37.905 00:46:42 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:37.905 00:46:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.905 00:46:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.905 00:46:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 ************************************ 00:27:37.905 START TEST nvmf_perf 00:27:37.905 ************************************ 00:27:37.905 00:46:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:38.163 * Looking for test storage... 
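[Editor's note] The run of FABRIC PROPERTY GET records that closed the identify run above is the host polling the controller's status register until shutdown completes (7 milliseconds here); the *DEBUG* records appear only because this is a debug build with NVMe tracing enabled. A rough way to reproduce that verbosity against the same listener - the -L option and the 'nvme' log-flag name are assumptions, and such flags are honored only by debug builds:

    # hypothetical re-run of SPDK's identify example with NVMe debug tracing on
    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -L nvme   # assumption: debug-build log flag name
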
00:27:38.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:38.163 00:46:42 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:38.163 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:38.164 Cannot find device "nvmf_tgt_br" 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:38.164 Cannot find device "nvmf_tgt_br2" 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:38.164 Cannot find device "nvmf_tgt_br" 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:38.164 Cannot find device "nvmf_tgt_br2" 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:38.164 00:46:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:38.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:38.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:38.164 
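[Editor's note] The records around this point are nvmf_veth_init building the test network: a bridge in the root namespace, one veth pair for the initiator, and two veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace. Condensed into one sketch, with every command lifted from the surrounding records (the records below carry out the rest of this same sequence):

    # nvmf_veth_init topology, condensed from the surrounding records
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks further below (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) then confirm the bridge forwards in both directions.
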
00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:38.164 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:38.423 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:38.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:27:38.424 00:27:38.424 --- 10.0.0.2 ping statistics --- 00:27:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.424 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:38.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:38.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:27:38.424 00:27:38.424 --- 10.0.0.3 ping statistics --- 00:27:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.424 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:38.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:38.424 00:27:38.424 --- 10.0.0.1 ping statistics --- 00:27:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.424 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=97139 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 97139 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 97139 ']' 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:38.424 00:46:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.682 [2024-07-12 00:46:43.403384] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:38.682 [2024-07-12 00:46:43.403631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.682 [2024-07-12 00:46:43.584280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.941 [2024-07-12 00:46:43.848063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.941 [2024-07-12 00:46:43.848124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
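[Editor's note] nvmfappstart, just above, launches the target inside the namespace and waitforlisten blocks until the app's RPC socket answers. A minimal equivalent of what those records did - the readiness probe here is an assumption standing in for the real waitforlisten helper, which polls /var/tmp/spdk.sock:

    # start the target in the test namespace (command as logged above)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # assumption: polling any RPC method approximates waitforlisten
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The rpc.py calls in the records that follow then stand up the data path: nvmf_create_transport -t tcp -o, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 and Nvme0n1 namespaces, and a listener on 10.0.0.2:4420.
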
00:27:38.941 [2024-07-12 00:46:43.848156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.941 [2024-07-12 00:46:43.848170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.941 [2024-07-12 00:46:43.848181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.941 [2024-07-12 00:46:43.848305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.941 [2024-07-12 00:46:43.848809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.941 [2024-07-12 00:46:43.849210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.941 [2024-07-12 00:46:43.849222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:39.508 00:46:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:27:40.076 00:46:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:27:40.076 00:46:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:40.334 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:27:40.334 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:40.594 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:40.594 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:27:40.594 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:40.594 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:40.594 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:40.853 [2024-07-12 00:46:45.726389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.853 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.112 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:41.112 00:46:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.371 00:46:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:41.371 00:46:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:41.938 00:46:46 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.938 [2024-07-12 00:46:46.857155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.197 00:46:46 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.197 00:46:47 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:42.197 00:46:47 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:42.197 00:46:47 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:42.197 00:46:47 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:43.576 Initializing NVMe Controllers 00:27:43.576 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:43.576 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:27:43.576 Initialization complete. Launching workers. 00:27:43.576 ======================================================== 00:27:43.576 Latency(us) 00:27:43.576 Device Information : IOPS MiB/s Average min max 00:27:43.576 PCIE (0000:00:10.0) NSID 1 from core 0: 20297.00 79.29 1575.85 377.70 8253.10 00:27:43.576 ======================================================== 00:27:43.576 Total : 20297.00 79.29 1575.85 377.70 8253.10 00:27:43.576 00:27:43.576 00:46:48 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:44.947 Initializing NVMe Controllers 00:27:44.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:44.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:44.947 Initialization complete. Launching workers. 00:27:44.947 ======================================================== 00:27:44.947 Latency(us) 00:27:44.947 Device Information : IOPS MiB/s Average min max 00:27:44.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2311.63 9.03 432.24 158.59 4503.85 00:27:44.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8094.97 4981.00 12102.78 00:27:44.947 ======================================================== 00:27:44.947 Total : 2436.13 9.52 823.83 158.59 12102.78 00:27:44.947 00:27:44.947 00:46:49 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.318 Initializing NVMe Controllers 00:27:46.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:46.318 Initialization complete. Launching workers. 
00:27:46.318 ======================================================== 00:27:46.318 Latency(us) 00:27:46.318 Device Information : IOPS MiB/s Average min max 00:27:46.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6508.01 25.42 4922.28 1071.18 10292.53 00:27:46.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2697.10 10.54 11974.64 5794.67 19977.61 00:27:46.318 ======================================================== 00:27:46.318 Total : 9205.11 35.96 6988.62 1071.18 19977.61 00:27:46.318 00:27:46.576 00:46:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:27:46.576 00:46:51 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:49.858 Initializing NVMe Controllers 00:27:49.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.858 Controller IO queue size 128, less than required. 00:27:49.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:49.858 Controller IO queue size 128, less than required. 00:27:49.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:49.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:49.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:49.858 Initialization complete. Launching workers. 00:27:49.858 ======================================================== 00:27:49.858 Latency(us) 00:27:49.858 Device Information : IOPS MiB/s Average min max 00:27:49.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1027.19 256.80 131358.55 87039.84 334870.60 00:27:49.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 497.87 124.47 272199.00 152248.16 485383.08 00:27:49.858 ======================================================== 00:27:49.858 Total : 1525.06 381.26 177336.92 87039.84 485383.08 00:27:49.858 00:27:49.858 00:46:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:49.858 Initializing NVMe Controllers 00:27:49.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.858 Controller IO queue size 128, less than required. 00:27:49.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:49.858 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:49.858 Controller IO queue size 128, less than required. 00:27:49.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:49.858 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:27:49.858 WARNING: Some requested NVMe devices were skipped 00:27:49.858 No valid NVMe controllers or AIO or URING devices found 00:27:49.858 00:46:54 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:53.156 Initializing NVMe Controllers 00:27:53.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.156 Controller IO queue size 128, less than required. 00:27:53.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.156 Controller IO queue size 128, less than required. 00:27:53.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:53.156 Initialization complete. Launching workers. 00:27:53.156 00:27:53.156 ==================== 00:27:53.156 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:53.156 TCP transport: 00:27:53.156 polls: 3867 00:27:53.156 idle_polls: 2020 00:27:53.156 sock_completions: 1847 00:27:53.156 nvme_completions: 3873 00:27:53.156 submitted_requests: 5904 00:27:53.156 queued_requests: 1 00:27:53.156 00:27:53.156 ==================== 00:27:53.156 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:53.156 TCP transport: 00:27:53.156 polls: 6758 00:27:53.156 idle_polls: 4863 00:27:53.156 sock_completions: 1895 00:27:53.156 nvme_completions: 3781 00:27:53.156 submitted_requests: 5626 00:27:53.156 queued_requests: 1 00:27:53.156 ======================================================== 00:27:53.156 Latency(us) 00:27:53.156 Device Information : IOPS MiB/s Average min max 00:27:53.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 967.33 241.83 140697.51 71816.23 439829.40 00:27:53.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 944.34 236.09 138563.52 81270.52 384975.61 00:27:53.156 ======================================================== 00:27:53.156 Total : 1911.67 477.92 139643.34 71816.23 439829.40 00:27:53.156 00:27:53.156 00:46:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:53.156 00:46:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.156 00:46:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:53.156 00:46:57 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:27:53.156 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b4ace718-29f1-4124-9ebc-6b08f41d3e8f 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b4ace718-29f1-4124-9ebc-6b08f41d3e8f 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=b4ace718-29f1-4124-9ebc-6b08f41d3e8f 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 
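[Editor's note] get_lvs_free_mb, whose xtrace records surround this point, turns bdev_lvol_get_lvstores output into a size in MB by multiplying free clusters by cluster size. Condensed - the UUID is the lvs_0 store created just above, and the final division is implied by the free_mb=5112 record below:

    # condensed sketch of get_lvs_free_mb for lvs_0
    lvs_uuid=b4ace718-29f1-4124-9ebc-6b08f41d3e8f
    lvs_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<< "$lvs_info")
    free_mb=$((fc * cs / 1024 / 1024))   # 1278 clusters * 4194304 B / 2^20 = 5112 MB
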
00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:53.430 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:53.688 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:53.688 { 00:27:53.688 "base_bdev": "Nvme0n1", 00:27:53.688 "block_size": 4096, 00:27:53.688 "cluster_size": 4194304, 00:27:53.688 "free_clusters": 1278, 00:27:53.688 "name": "lvs_0", 00:27:53.688 "total_data_clusters": 1278, 00:27:53.688 "uuid": "b4ace718-29f1-4124-9ebc-6b08f41d3e8f" 00:27:53.688 } 00:27:53.688 ]' 00:27:53.688 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b4ace718-29f1-4124-9ebc-6b08f41d3e8f") .free_clusters' 00:27:53.688 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:27:53.688 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b4ace718-29f1-4124-9ebc-6b08f41d3e8f") .cluster_size' 00:27:53.946 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:53.946 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:27:53.946 5112 00:27:53.946 00:46:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:27:53.946 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:27:53.946 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b4ace718-29f1-4124-9ebc-6b08f41d3e8f lbd_0 5112 00:27:54.204 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=55ce646d-0654-4796-8327-7e46d7f9ae40 00:27:54.204 00:46:58 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 55ce646d-0654-4796-8327-7e46d7f9ae40 lvs_n_0 00:27:54.463 00:46:59 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=25ce3633-f95c-4bd4-a035-3e67d30e57aa 00:27:54.464 00:46:59 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 25ce3633-f95c-4bd4-a035-3e67d30e57aa 00:27:54.464 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=25ce3633-f95c-4bd4-a035-3e67d30e57aa 00:27:54.464 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:54.464 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:54.464 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:54.723 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:54.982 { 00:27:54.982 "base_bdev": "Nvme0n1", 00:27:54.982 "block_size": 4096, 00:27:54.982 "cluster_size": 4194304, 00:27:54.982 "free_clusters": 0, 00:27:54.982 "name": "lvs_0", 00:27:54.982 "total_data_clusters": 1278, 00:27:54.982 "uuid": "b4ace718-29f1-4124-9ebc-6b08f41d3e8f" 00:27:54.982 }, 00:27:54.982 { 00:27:54.982 "base_bdev": "55ce646d-0654-4796-8327-7e46d7f9ae40", 00:27:54.982 "block_size": 4096, 00:27:54.982 "cluster_size": 4194304, 00:27:54.982 "free_clusters": 1276, 00:27:54.982 "name": "lvs_n_0", 00:27:54.982 "total_data_clusters": 1276, 00:27:54.982 "uuid": "25ce3633-f95c-4bd4-a035-3e67d30e57aa" 00:27:54.982 } 00:27:54.982 ]' 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="25ce3633-f95c-4bd4-a035-3e67d30e57aa") .free_clusters' 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="25ce3633-f95c-4bd4-a035-3e67d30e57aa") .cluster_size' 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:27:54.982 5104 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:27:54.982 00:46:59 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 25ce3633-f95c-4bd4-a035-3e67d30e57aa lbd_nest_0 5104 00:27:55.240 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c9263183-df5a-42ab-90c8-4ab9c6a447f6 00:27:55.240 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.499 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:55.499 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c9263183-df5a-42ab-90c8-4ab9c6a447f6 00:27:55.758 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.016 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:56.016 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:56.016 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:56.016 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:56.016 00:47:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.581 Initializing NVMe Controllers 00:27:56.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.581 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:27:56.581 WARNING: Some requested NVMe devices were skipped 00:27:56.581 No valid NVMe controllers or AIO or URING devices found 00:27:56.581 00:47:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:56.581 00:47:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.776 Initializing NVMe Controllers 00:28:08.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:08.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:08.776 Initialization complete. Launching workers. 
00:28:08.776 ======================================================== 00:28:08.776 Latency(us) 00:28:08.776 Device Information : IOPS MiB/s Average min max 00:28:08.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 703.22 87.90 1420.33 475.69 7820.47 00:28:08.776 ======================================================== 00:28:08.776 Total : 703.22 87.90 1420.33 475.69 7820.47 00:28:08.776 00:28:08.776 00:47:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:08.776 00:47:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:08.776 00:47:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.776 Initializing NVMe Controllers 00:28:08.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:08.776 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:08.776 WARNING: Some requested NVMe devices were skipped 00:28:08.776 No valid NVMe controllers or AIO or URING devices found 00:28:08.776 00:47:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:08.776 00:47:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.741 Initializing NVMe Controllers 00:28:18.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.741 Initialization complete. Launching workers. 
00:28:18.741 ======================================================== 00:28:18.741 Latency(us) 00:28:18.741 Device Information : IOPS MiB/s Average min max 00:28:18.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1136.50 142.06 28187.16 8006.40 279014.38 00:28:18.741 ======================================================== 00:28:18.741 Total : 1136.50 142.06 28187.16 8006.40 279014.38 00:28:18.741 00:28:18.741 00:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:18.741 00:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:18.741 00:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.741 Initializing NVMe Controllers 00:28:18.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.741 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:28:18.741 WARNING: Some requested NVMe devices were skipped 00:28:18.741 No valid NVMe controllers or AIO or URING devices found 00:28:18.741 00:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:18.741 00:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.706 Initializing NVMe Controllers 00:28:28.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.706 Controller IO queue size 128, less than required. 00:28:28.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.706 Initialization complete. Launching workers. 
00:28:28.706 ======================================================== 00:28:28.706 Latency(us) 00:28:28.706 Device Information : IOPS MiB/s Average min max 00:28:28.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3141.27 392.66 40805.68 9731.70 99142.14 00:28:28.706 ======================================================== 00:28:28.706 Total : 3141.27 392.66 40805.68 9731.70 99142.14 00:28:28.706 00:28:28.706 00:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.963 00:47:33 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c9263183-df5a-42ab-90c8-4ab9c6a447f6 00:28:29.530 00:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:29.788 00:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55ce646d-0654-4796-8327-7e46d7f9ae40 00:28:29.788 00:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:30.352 00:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:30.352 00:47:34 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:30.352 00:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.352 00:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:30.352 rmmod nvme_tcp 00:28:30.352 rmmod nvme_fabrics 00:28:30.352 rmmod nvme_keyring 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 97139 ']' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 97139 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 97139 ']' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 97139 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97139 00:28:30.352 killing process with pid 97139 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97139' 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 97139 00:28:30.352 00:47:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 97139 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:32.885 00:28:32.885 real 0m54.911s 00:28:32.885 user 3m25.272s 00:28:32.885 sys 0m11.855s 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.885 00:47:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:32.885 ************************************ 00:28:32.885 END TEST nvmf_perf 00:28:32.885 ************************************ 00:28:32.885 00:47:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:32.885 00:47:37 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:32.885 00:47:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:32.885 00:47:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.885 00:47:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.885 ************************************ 00:28:32.885 START TEST nvmf_fio_host 00:28:32.885 ************************************ 00:28:32.885 00:47:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:33.143 * Looking for test storage... 
00:28:33.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.143 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
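On this virtual rig prepare_net_devs resolves to nvmf_veth_init, whose trace follows: stale interfaces are torn down first (the "Cannot find device" lines are expected on a fresh run), then a network namespace, two veth pairs, and a bridge are assembled so the initiator (10.0.0.1) and the namespaced target (10.0.0.2, 10.0.0.3) can exchange NVMe/TCP traffic on a single host. A condensed sketch of that topology, lifted from the traced commands, with error handling and the second target interface elided:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk \
      ip addr add 10.0.0.2/24 dev nvmf_tgt_if              # target side
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                  # bridge both ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # reachability check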
00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:33.144 Cannot find device "nvmf_tgt_br" 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:33.144 Cannot find device "nvmf_tgt_br2" 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:33.144 Cannot find device "nvmf_tgt_br" 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:33.144 Cannot find device "nvmf_tgt_br2" 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:33.144 00:47:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:33.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:33.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:33.144 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:33.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:28:33.401 00:28:33.401 --- 10.0.0.2 ping statistics --- 00:28:33.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.401 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:33.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:33.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:28:33.401 00:28:33.401 --- 10.0.0.3 ping statistics --- 00:28:33.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.401 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:33.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:28:33.401 00:28:33.401 --- 10.0.0.1 ping statistics --- 00:28:33.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.401 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=98140 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 98140 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 98140 ']' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.401 00:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.656 [2024-07-12 00:47:38.362380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:33.656 [2024-07-12 00:47:38.362599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.656 [2024-07-12 00:47:38.542806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.913 [2024-07-12 00:47:38.772802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
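With the namespace reachable, fio.sh launches the target inside it (the nvmfpid=98140 seen above) and blocks until the RPC socket responds before creating the TCP transport. A minimal sketch of that launch-and-wait step; the socket-poll loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact logic:

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # 4 cores, all trace groups
  nvmfpid=$!
  # Poll the RPC socket until the target answers (assumed stand-in for
  # waitforlisten); bail out if the process dies first.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
  done
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192   # traced below at fio.sh@29

The spdk_trace notices continuing below are printed by the target during this same startup.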
00:28:33.914 [2024-07-12 00:47:38.772924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.914 [2024-07-12 00:47:38.772944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.914 [2024-07-12 00:47:38.772961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.914 [2024-07-12 00:47:38.772974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.914 [2024-07-12 00:47:38.773224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.914 [2024-07-12 00:47:38.773425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.914 [2024-07-12 00:47:38.774358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.914 [2024-07-12 00:47:38.774363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.478 00:47:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.478 00:47:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:28:34.478 00:47:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:34.735 [2024-07-12 00:47:39.459653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.735 00:47:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:34.735 00:47:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.735 00:47:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.735 00:47:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:34.992 Malloc1 00:28:34.992 00:47:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.248 00:47:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:35.505 00:47:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.763 [2024-07-12 00:47:40.569647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.763 00:47:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:36.021 00:47:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:36.350 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:36.350 fio-3.35 00:28:36.350 Starting 1 thread 00:28:38.873 00:28:38.873 test: (groupid=0, jobs=1): err= 0: pid=98259: Fri Jul 12 00:47:43 2024 00:28:38.873 read: IOPS=6682, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2009msec) 00:28:38.873 slat (usec): min=2, max=339, avg= 4.06, stdev= 4.33 00:28:38.873 clat (usec): min=3742, max=18316, avg=10074.33, stdev=1204.25 00:28:38.873 lat (usec): min=3779, max=18323, avg=10078.38, stdev=1204.37 00:28:38.873 clat percentiles (usec): 00:28:38.873 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:28:38.873 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:28:38.873 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11863], 00:28:38.873 | 99.00th=[15008], 99.50th=[16057], 99.90th=[17433], 99.95th=[17695], 00:28:38.873 | 99.99th=[18220] 00:28:38.873 bw ( KiB/s): min=24742, max=27528, per=99.91%, avg=26707.50, stdev=1316.85, samples=4 00:28:38.873 iops : min= 6185, max= 6882, avg=6676.75, stdev=329.46, samples=4 00:28:38.873 write: IOPS=6687, BW=26.1MiB/s (27.4MB/s)(52.5MiB/2009msec); 0 zone resets 00:28:38.873 slat (usec): min=2, max=264, avg= 4.25, stdev= 3.16 00:28:38.873 clat (usec): min=2614, max=17310, avg=8939.14, stdev=917.76 00:28:38.873 lat (usec): min=2635, max=17314, avg=8943.40, stdev=917.68 00:28:38.873 clat percentiles (usec): 00:28:38.873 | 1.00th=[ 6259], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8455], 00:28:38.873 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:28:38.873 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:28:38.873 | 99.00th=[11600], 99.50th=[12518], 99.90th=[15795], 99.95th=[16319], 00:28:38.873 | 99.99th=[17171] 00:28:38.873 bw ( KiB/s): min=26043, max=27464, 
per=99.94%, avg=26736.75, stdev=599.57, samples=4 00:28:38.873 iops : min= 6510, max= 6866, avg=6684.00, stdev=150.18, samples=4 00:28:38.873 lat (msec) : 4=0.06%, 10=73.06%, 20=26.88% 00:28:38.873 cpu : usr=67.48%, sys=21.91%, ctx=18, majf=0, minf=1538 00:28:38.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:38.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:38.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:38.873 issued rwts: total=13426,13436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:38.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:38.873 00:28:38.873 Run status group 0 (all jobs): 00:28:38.873 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2009-2009msec 00:28:38.873 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.5MiB (55.0MB), run=2009-2009msec 00:28:38.873 ----------------------------------------------------- 00:28:38.873 Suppressions used: 00:28:38.873 count bytes template 00:28:38.873 1 57 /usr/src/fio/parse.c 00:28:38.873 1 8 libtcmalloc_minimal.so 00:28:38.873 ----------------------------------------------------- 00:28:38.873 00:28:38.873 00:47:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:38.874 00:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp 
adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:39.130 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:39.130 fio-3.35 00:28:39.130 Starting 1 thread 00:28:41.653 00:28:41.653 test: (groupid=0, jobs=1): err= 0: pid=98301: Fri Jul 12 00:47:46 2024 00:28:41.653 read: IOPS=5742, BW=89.7MiB/s (94.1MB/s)(180MiB/2008msec) 00:28:41.653 slat (usec): min=3, max=139, avg= 5.27, stdev= 2.62 00:28:41.653 clat (usec): min=3584, max=26296, avg=12975.27, stdev=3231.45 00:28:41.653 lat (usec): min=3589, max=26301, avg=12980.55, stdev=3231.85 00:28:41.653 clat percentiles (usec): 00:28:41.653 | 1.00th=[ 6521], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10159], 00:28:41.653 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12911], 60.00th=[13566], 00:28:41.653 | 70.00th=[14353], 80.00th=[15664], 90.00th=[17171], 95.00th=[18744], 00:28:41.653 | 99.00th=[21365], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:28:41.653 | 99.99th=[23987] 00:28:41.653 bw ( KiB/s): min=44224, max=49856, per=51.71%, avg=47504.00, stdev=2381.58, samples=4 00:28:41.653 iops : min= 2764, max= 3116, avg=2969.00, stdev=148.85, samples=4 00:28:41.653 write: IOPS=3340, BW=52.2MiB/s (54.7MB/s)(96.7MiB/1853msec); 0 zone resets 00:28:41.653 slat (usec): min=36, max=238, avg=42.94, stdev= 8.03 00:28:41.653 clat (usec): min=9471, max=33561, avg=16492.36, stdev=3080.95 00:28:41.653 lat (usec): min=9510, max=33600, avg=16535.30, stdev=3081.61 00:28:41.653 clat percentiles (usec): 00:28:41.653 | 1.00th=[10552], 5.00th=[11863], 10.00th=[12649], 20.00th=[13829], 00:28:41.653 | 30.00th=[14746], 40.00th=[15533], 50.00th=[16450], 60.00th=[16909], 00:28:41.653 | 70.00th=[17695], 80.00th=[19006], 90.00th=[20579], 95.00th=[21627], 00:28:41.653 | 99.00th=[25035], 99.50th=[26346], 99.90th=[27395], 99.95th=[28967], 00:28:41.653 | 99.99th=[33817] 00:28:41.653 bw ( KiB/s): min=46528, max=51776, per=92.65%, avg=49520.00, stdev=2188.12, samples=4 00:28:41.653 iops : min= 2908, max= 3236, avg=3095.00, stdev=136.76, samples=4 00:28:41.653 lat (msec) : 4=0.05%, 10=12.26%, 20=81.22%, 50=6.47% 00:28:41.653 cpu : usr=73.94%, sys=17.39%, ctx=5, majf=0, minf=2026 00:28:41.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:41.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:41.653 issued rwts: total=11530,6190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.653 00:28:41.653 Run status group 0 (all jobs): 00:28:41.653 READ: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (189MB), run=2008-2008msec 00:28:41.653 WRITE: bw=52.2MiB/s (54.7MB/s), 52.2MiB/s-52.2MiB/s (54.7MB/s-54.7MB/s), io=96.7MiB (101MB), run=1853-1853msec 00:28:41.653 ----------------------------------------------------- 00:28:41.653 Suppressions used: 00:28:41.653 count bytes template 00:28:41.653 1 57 /usr/src/fio/parse.c 00:28:41.653 598 57408 /usr/src/fio/iolog.c 00:28:41.653 1 8 libtcmalloc_minimal.so 00:28:41.653 ----------------------------------------------------- 00:28:41.653 00:28:41.653 00:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:41.909 00:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:28:42.166 Nvme0n1 00:28:42.166 00:47:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:42.424 00:47:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d960ab7e-2733-4753-ad2a-18fc036a7a0c 00:28:42.424 00:47:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d960ab7e-2733-4753-ad2a-18fc036a7a0c 00:28:42.424 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d960ab7e-2733-4753-ad2a-18fc036a7a0c 00:28:42.424 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:42.425 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:42.425 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:42.425 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:42.682 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:42.682 { 00:28:42.682 "base_bdev": "Nvme0n1", 00:28:42.682 "block_size": 4096, 00:28:42.682 "cluster_size": 1073741824, 00:28:42.682 "free_clusters": 4, 00:28:42.682 "name": "lvs_0", 00:28:42.682 "total_data_clusters": 4, 00:28:42.682 "uuid": "d960ab7e-2733-4753-ad2a-18fc036a7a0c" 00:28:42.682 } 00:28:42.682 ]' 00:28:42.682 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d960ab7e-2733-4753-ad2a-18fc036a7a0c") .free_clusters' 00:28:42.682 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:28:42.682 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d960ab7e-2733-4753-ad2a-18fc036a7a0c") .cluster_size' 00:28:42.939 4096 00:28:42.939 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:42.939 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:28:42.939 00:47:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:28:42.939 00:47:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:28:42.939 29ed1c6c-bcf8-4292-9a96-3a06bded40c5 00:28:43.197 00:47:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:43.197 00:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:43.761 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:43.762 00:47:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:44.019 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:44.019 fio-3.35 00:28:44.019 Starting 1 thread 00:28:46.543 00:28:46.543 test: (groupid=0, jobs=1): err= 0: pid=98451: Fri Jul 12 00:47:51 2024 00:28:46.543 read: IOPS=4632, BW=18.1MiB/s (19.0MB/s)(36.4MiB/2011msec) 00:28:46.543 slat (usec): min=2, max=229, avg= 3.34, stdev= 3.19 00:28:46.543 clat (usec): min=5602, max=24019, avg=14516.05, stdev=1295.53 00:28:46.543 lat (usec): min=5608, max=24022, avg=14519.39, stdev=1295.38 00:28:46.543 clat percentiles (usec): 00:28:46.543 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:28:46.543 | 
30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:28:46.543 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:28:46.543 | 99.00th=[17695], 99.50th=[18220], 99.90th=[22152], 99.95th=[23725], 00:28:46.543 | 99.99th=[23987] 00:28:46.543 bw ( KiB/s): min=17768, max=18848, per=99.84%, avg=18500.00, stdev=508.09, samples=4 00:28:46.543 iops : min= 4442, max= 4712, avg=4625.00, stdev=127.02, samples=4 00:28:46.543 write: IOPS=4631, BW=18.1MiB/s (19.0MB/s)(36.4MiB/2011msec); 0 zone resets 00:28:46.543 slat (usec): min=2, max=170, avg= 3.59, stdev= 2.78 00:28:46.543 clat (usec): min=2535, max=25022, avg=12991.95, stdev=1225.46 00:28:46.543 lat (usec): min=2548, max=25026, avg=12995.55, stdev=1225.32 00:28:46.543 clat percentiles (usec): 00:28:46.543 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:28:46.543 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:28:46.543 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:28:46.543 | 99.00th=[15664], 99.50th=[16188], 99.90th=[22152], 99.95th=[23725], 00:28:46.543 | 99.99th=[25035] 00:28:46.543 bw ( KiB/s): min=18336, max=18712, per=99.89%, avg=18504.00, stdev=155.54, samples=4 00:28:46.543 iops : min= 4584, max= 4678, avg=4626.00, stdev=38.88, samples=4 00:28:46.543 lat (msec) : 4=0.03%, 10=0.35%, 20=99.38%, 50=0.24% 00:28:46.543 cpu : usr=73.13%, sys=20.65%, ctx=560, majf=0, minf=1538 00:28:46.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:46.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:46.543 issued rwts: total=9316,9313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:46.543 00:28:46.543 Run status group 0 (all jobs): 00:28:46.543 READ: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=36.4MiB (38.2MB), run=2011-2011msec 00:28:46.543 WRITE: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=36.4MiB (38.1MB), run=2011-2011msec 00:28:46.543 ----------------------------------------------------- 00:28:46.543 Suppressions used: 00:28:46.543 count bytes template 00:28:46.543 1 58 /usr/src/fio/parse.c 00:28:46.543 1 8 libtcmalloc_minimal.so 00:28:46.543 ----------------------------------------------------- 00:28:46.543 00:28:46.543 00:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:46.802 00:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fdf7a1d3-8890-43fa-a9bd-2629b0147305 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fdf7a1d3-8890-43fa-a9bd-2629b0147305 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fdf7a1d3-8890-43fa-a9bd-2629b0147305 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:47.060 00:47:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:47.624 { 00:28:47.624 "base_bdev": "Nvme0n1", 00:28:47.624 "block_size": 4096, 00:28:47.624 "cluster_size": 1073741824, 00:28:47.624 "free_clusters": 0, 00:28:47.624 "name": "lvs_0", 00:28:47.624 "total_data_clusters": 4, 00:28:47.624 "uuid": "d960ab7e-2733-4753-ad2a-18fc036a7a0c" 00:28:47.624 }, 00:28:47.624 { 00:28:47.624 "base_bdev": "29ed1c6c-bcf8-4292-9a96-3a06bded40c5", 00:28:47.624 "block_size": 4096, 00:28:47.624 "cluster_size": 4194304, 00:28:47.624 "free_clusters": 1022, 00:28:47.624 "name": "lvs_n_0", 00:28:47.624 "total_data_clusters": 1022, 00:28:47.624 "uuid": "fdf7a1d3-8890-43fa-a9bd-2629b0147305" 00:28:47.624 } 00:28:47.624 ]' 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fdf7a1d3-8890-43fa-a9bd-2629b0147305") .free_clusters' 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fdf7a1d3-8890-43fa-a9bd-2629b0147305") .cluster_size' 00:28:47.624 4088 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:47.624 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:28:47.625 00:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:28:47.625 00:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:28:47.882 1db9032b-299c-4741-9649-82088a866f97 00:28:47.882 00:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:48.142 00:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:48.399 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:48.655 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:48.655 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:48.655 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:28:48.655 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:48.655 00:47:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:48.655 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:48.655 fio-3.35 00:28:48.655 Starting 1 thread 00:28:51.180 00:28:51.180 test: (groupid=0, jobs=1): err= 0: pid=98559: Fri Jul 12 00:47:55 2024 00:28:51.180 read: IOPS=4584, BW=17.9MiB/s (18.8MB/s)(36.0MiB/2010msec) 00:28:51.180 slat (usec): min=2, max=226, avg= 3.51, stdev= 3.22 00:28:51.180 clat (usec): min=8205, max=25881, avg=14774.54, stdev=1616.85 00:28:51.180 lat (usec): min=8217, max=25884, avg=14778.05, stdev=1616.73 00:28:51.180 clat percentiles (usec): 00:28:51.180 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12911], 20.00th=[13566], 00:28:51.180 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:28:51.180 | 70.00th=[15401], 80.00th=[15926], 90.00th=[16581], 95.00th=[17695], 00:28:51.180 | 99.00th=[19792], 99.50th=[20579], 99.90th=[22414], 99.95th=[24511], 00:28:51.180 | 99.99th=[25822] 00:28:51.180 bw ( KiB/s): min=17048, max=18880, per=99.70%, avg=18284.00, stdev=839.57, samples=4 00:28:51.180 iops : min= 4262, max= 4720, avg=4571.00, stdev=209.89, samples=4 00:28:51.180 write: IOPS=4583, BW=17.9MiB/s (18.8MB/s)(36.0MiB/2010msec); 0 zone resets 00:28:51.180 slat (usec): min=2, max=195, avg= 3.67, stdev= 2.48 00:28:51.180 clat (usec): min=4533, max=22332, avg=12981.45, stdev=1340.51 00:28:51.180 lat (usec): min=4548, max=22335, avg=12985.12, stdev=1340.49 00:28:51.180 clat percentiles (usec): 00:28:51.180 | 1.00th=[ 9896], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:28:51.180 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:28:51.180 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:28:51.180 | 99.00th=[16909], 99.50th=[18220], 99.90th=[20841], 99.95th=[21890], 00:28:51.180 | 99.99th=[22414] 00:28:51.180 bw ( KiB/s): min=18032, max=18664, per=99.91%, avg=18316.00, stdev=262.30, samples=4 00:28:51.180 iops : min= 4508, max= 4666, avg=4579.00, stdev=65.57, samples=4 00:28:51.180 lat (msec) : 10=0.65%, 20=98.84%, 50=0.50% 00:28:51.180 cpu : usr=73.77%, sys=20.01%, ctx=7, majf=0, minf=1539 00:28:51.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:51.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:51.180 issued 
rwts: total=9215,9212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:51.180 00:28:51.180 Run status group 0 (all jobs): 00:28:51.180 READ: bw=17.9MiB/s (18.8MB/s), 17.9MiB/s-17.9MiB/s (18.8MB/s-18.8MB/s), io=36.0MiB (37.7MB), run=2010-2010msec 00:28:51.180 WRITE: bw=17.9MiB/s (18.8MB/s), 17.9MiB/s-17.9MiB/s (18.8MB/s-18.8MB/s), io=36.0MiB (37.7MB), run=2010-2010msec 00:28:51.437 ----------------------------------------------------- 00:28:51.437 Suppressions used: 00:28:51.437 count bytes template 00:28:51.437 1 58 /usr/src/fio/parse.c 00:28:51.437 1 8 libtcmalloc_minimal.so 00:28:51.437 ----------------------------------------------------- 00:28:51.437 00:28:51.437 00:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:51.694 00:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:28:51.694 00:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:51.950 00:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:52.207 00:47:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:52.464 00:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:52.721 00:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:53.284 00:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:53.284 00:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:53.284 00:47:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:53.284 00:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.284 00:47:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.284 rmmod nvme_tcp 00:28:53.284 rmmod nvme_fabrics 00:28:53.284 rmmod nvme_keyring 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 98140 ']' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 98140 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 98140 ']' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 98140 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98140 00:28:53.284 killing process with 
pid 98140 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98140' 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 98140 00:28:53.284 00:47:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 98140 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:54.743 00:28:54.743 real 0m21.766s 00:28:54.743 user 1m33.614s 00:28:54.743 sys 0m4.704s 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:54.743 00:47:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.743 ************************************ 00:28:54.743 END TEST nvmf_fio_host 00:28:54.743 ************************************ 00:28:54.743 00:47:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:54.743 00:47:59 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:54.743 00:47:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:54.743 00:47:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.743 00:47:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:54.743 ************************************ 00:28:54.743 START TEST nvmf_failover 00:28:54.743 ************************************ 00:28:54.743 00:47:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:54.743 * Looking for test storage... 
00:28:54.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:54.743 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:54.743 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:54.743 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.743 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... same toolchain directories repeated several more times, as in the export.sh@2 value above; long duplicated PATH string trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... duplicated PATH string trimmed ...]:/var/lib/snapd/snap/bin 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated PATH string trimmed ...]:/var/lib/snapd/snap/bin 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.744 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.003
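For orientation, the configuration that nvmf/common.sh and host/failover.sh have just established boils down to a handful of shell variables. A condensed sketch (values copied from the trace above; the NVME_HOSTID derivation from the generated host NQN is an assumption, not the verbatim script):

  # Condensed test environment, per the xtrace above (sketch, not the real scripts)
  NVMF_PORT=4420                      # primary listener
  NVMF_SECOND_PORT=4421               # first failover listener
  NVMF_THIRD_PORT=4422                # second failover listener
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:637b...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:} # UUID suffix of the host NQN (assumed derivation)
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  MALLOC_BDEV_SIZE=64                 # MiB, backing Malloc bdev
  MALLOC_BLOCK_SIZE=512               # bytes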
00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:55.003 Cannot find device "nvmf_tgt_br" 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:55.003 Cannot find device "nvmf_tgt_br2" 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:28:55.003 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:55.004 Cannot find device "nvmf_tgt_br" 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:55.004 Cannot find device "nvmf_tgt_br2" 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
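The failed deletions traced here are expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, tolerating "Cannot find device" and "Cannot open network namespace" errors on a fresh host. A minimal sketch of that tolerant-teardown pattern (interface and namespace names from the log; the explicit '|| true' guards are an assumption standing in for the script's own error handling):

  # Best-effort teardown before rebuilding the veth topology; every step may
  # legitimately fail when there is nothing left over ('|| true' is assumed)
  for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster 2>/dev/null || true
      ip link set "$br" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true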
00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:55.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:55.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:55.004 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:55.263 00:47:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:55.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:55.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:28:55.263 00:28:55.263 --- 10.0.0.2 ping statistics --- 00:28:55.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.263 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:55.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:55.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:28:55.263 00:28:55.263 --- 10.0.0.3 ping statistics --- 00:28:55.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.263 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:55.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:28:55.263 00:28:55.263 --- 10.0.0.1 ping statistics --- 00:28:55.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.263 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=98849 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 98849 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 98849 ']' 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
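The three pings above confirm the fixture end to end. Taken together, the nvmf_veth_init steps amount to the following topology script: the two target interfaces (10.0.0.2, 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator stays on 10.0.0.1 in the root namespace, and the nvmf_br bridge joins the three veth peer ends (commands consolidated verbatim from the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the three pairs
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT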
00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.263 00:48:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:55.263 [2024-07-12 00:48:00.175711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:55.263 [2024-07-12 00:48:00.175880] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.521 [2024-07-12 00:48:00.355888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.780 [2024-07-12 00:48:00.622035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.780 [2024-07-12 00:48:00.622107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.780 [2024-07-12 00:48:00.622124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.780 [2024-07-12 00:48:00.622139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.780 [2024-07-12 00:48:00.622150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.780 [2024-07-12 00:48:00.622511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.780 [2024-07-12 00:48:00.623696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.780 [2024-07-12 00:48:00.623699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.346 00:48:01 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:56.605 [2024-07-12 00:48:01.495702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.605 00:48:01 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:57.172 Malloc0 00:28:57.172 00:48:01 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.430 00:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.689 00:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.947 [2024-07-12 00:48:02.631546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.947 00:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:57.947 [2024-07-12 00:48:02.875883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:58.217 00:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:58.217 [2024-07-12 00:48:03.124425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=98961 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 98961 /var/tmp/bdevperf.sock 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 98961 ']' 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
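With the target listening on 10.0.0.2 ports 4420/4421/4422 and bdevperf waiting on its own RPC socket, the trace that follows exercises failover by removing listeners out from under the active path while a 15-second verify workload runs. Condensed into the underlying commands (all taken from the host/failover.sh trace in this log; the listener loop, the explicit backgrounding, and the omitted sleeps are presentation shortcuts, not the verbatim script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"
  # Target side: Malloc-backed namespace, three TCP listeners on one subsystem
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # Host side: two initial paths, start verify I/O, then yank listeners
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop active path; fail over to 4421
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # fail over to 4422
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore 4420
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # fail back to 4420
  wait  # the run ends with bdevperf reporting 0, as seen below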
00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.217 00:48:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:59.594 00:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.594 00:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:28:59.594 00:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:59.594 NVMe0n1 00:28:59.594 00:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:00.161 00:29:00.161 00:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=99013 00:29:00.161 00:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:00.161 00:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:01.096 00:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.355 [2024-07-12 00:48:06.132471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:29:01.355
[... identical "The recv state of tqpair=0x618000003880 is same with the state(5) to be set" messages repeated from 00:48:06.132612 through 00:48:06.133249; duplicate lines omitted ...]
00:29:01.355 00:48:06 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:04.641 00:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.641 00:29:04.641 00:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:04.900 [2024-07-12 00:48:09.778277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:29:04.900
[... identical messages for tqpair=0x618000004080 repeated from 00:48:09.778422 through 00:48:09.779043; duplicate lines omitted ...]
00:29:04.901 00:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:08.183 00:48:12 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.183 [2024-07-12 00:48:13.089282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.183 00:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:09.559 00:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:09.559 [2024-07-12 00:48:14.390925] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:29:09.559
[... identical messages for tqpair=0x618000004c80 repeated from 00:48:14.391039 through 00:48:14.391471; duplicate lines omitted ...]
00:29:09.559 00:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 99013 00:29:16.120 0 00:29:16.120 00:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 98961 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 98961 ']' 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 98961 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98961 killing process with pid 98961 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98961' 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 98961 00:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 98961 00:29:16.385 00:48:21 nvmf_tcp.nvmf_failover --
host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:16.385 [2024-07-12 00:48:03.241491] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:16.385 [2024-07-12 00:48:03.241694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98961 ] 00:29:16.385 [2024-07-12 00:48:03.410755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.385 [2024-07-12 00:48:03.693862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.385 Running I/O for 15 seconds... 00:29:16.385 [2024-07-12 00:48:06.134896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.385 [2024-07-12 00:48:06.134992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.385 [2024-07-12 00:48:06.135304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.385 [2024-07-12 00:48:06.135334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:16.385
[... the same nvme_qpair print_command / print_completion pair repeats for each remaining queued WRITE (lba 57760 through 58080, cid varying), every command completing with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the repetitive per-command entries are omitted here ...]
OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 
00:48:06.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.137964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.137988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.386 [2024-07-12 00:48:06.138009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.386 [2024-07-12 00:48:06.138032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.387 [2024-07-12 00:48:06.138814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.138963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.138983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 
00:48:06.139630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.387 [2024-07-12 00:48:06.139869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.387 [2024-07-12 00:48:06.139891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.139911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.139933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.139952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.139975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.139995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.388 [2024-07-12 00:48:06.140529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.388 [2024-07-12 00:48:06.140551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
00:29:16.388 [2024-07-12 00:48:06.140551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:06.140919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.140982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:16.388 [2024-07-12 00:48:06.141004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:16.388 [2024-07-12 00:48:06.141023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58584 len:8 PRP1 0x0 PRP2 0x0
00:29:16.388 [2024-07-12 00:48:06.141043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.141344] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller.
00:29:16.388 [2024-07-12 00:48:06.141375] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:16.388 [2024-07-12 00:48:06.141482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.388 [2024-07-12 00:48:06.141513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.141536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.388 [2024-07-12 00:48:06.141556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.141576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.388 [2024-07-12 00:48:06.141596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.141616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.388 [2024-07-12 00:48:06.141635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.388 [2024-07-12 00:48:06.141654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:16.388 [2024-07-12 00:48:06.141743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:29:16.388 [2024-07-12 00:48:06.145989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:16.388 [2024-07-12 00:48:06.187335] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
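The burst above is one full iteration of the test's NVMe-oF TCP failover path: the I/O qpair on 10.0.0.2:4420 is torn down, every queued WRITE and READ is manually completed, failover toward 10.0.0.2:4421 begins, the four outstanding admin ASYNC EVENT REQUESTs are aborted, the controller enters the failed state (the TCP flush on the dead socket fails with Bad file descriptor), and the controller is disconnected and reset successfully. The status pair printed as (00/08) is (SCT/SC): Status Code Type 0x0, Generic Command Status, Status Code 0x08, Command Aborted due to SQ Deletion, per the NVMe base specification. When triaging a run like this, a throwaway script can condense these dumps into per-opcode counts; the sketch below is hypothetical (the file name, the summarize() helper, and the CLI handling are not part of SPDK), with only the line format taken from this console output.

    #!/usr/bin/env python3
    # summarize_aborts.py: hypothetical triage helper, not part of SPDK.
    # Tallies the nvme_io_qpair_print_command *NOTICE* lines from a saved
    # console log; in this test each of them is paired with an
    # "ABORTED - SQ DELETION (00/08)" completion.
    import re
    import sys
    from collections import Counter

    # Format taken from the dump above, e.g.:
    #   ... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58520 len:8 ...
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(lines):
        per_op = Counter()  # (opcode, sqid) -> aborted command count
        lbas = []
        for line in lines:
            # finditer(), since console wrapping can fuse several log
            # entries onto one physical line.
            for m in CMD_RE.finditer(line):
                op, sqid, _cid, _nsid, lba, _len = m.groups()
                per_op[(op, int(sqid))] += 1
                lbas.append(int(lba))
        if not lbas:
            print("no aborted commands found")
            return
        print(f"{sum(per_op.values())} commands aborted by SQ deletion, "
              f"lba range {min(lbas)}-{max(lbas)}")
        for (op, sqid), n in sorted(per_op.items()):
            print(f"  {op} sqid:{sqid}: {n}")

    if __name__ == "__main__":
        with open(sys.argv[1]) as fh:
            summarize(fh)

Run as python3 summarize_aborts.py console.log against a saved copy of this output to get a few summary lines instead of hundreds of notices.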
00:29:16.388 [2024-07-12 00:48:09.780845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.388 [2024-07-12 00:48:09.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pattern repeats at the next failover for the I/O queued on qid:1: WRITEs lba 122272-122496 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs lba 121856-122248 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08); timestamps 00:48:09.780992 through 00:48:09.784613 ...]
00:29:16.390 [2024-07-12 00:48:09.784634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.390 [2024-07-12 00:48:09.784654] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.390 [2024-07-12 00:48:09.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.390 [2024-07-12 00:48:09.784992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.785968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.785987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:16.391 [2024-07-12 00:48:09.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.786028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.391 [2024-07-12 00:48:09.786068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122760 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122768 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122776 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122784 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122792 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786500] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122800 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122808 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122816 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122824 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122832 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.391 [2024-07-12 00:48:09.786862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.391 [2024-07-12 00:48:09.786878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122840 len:8 PRP1 0x0 PRP2 0x0 00:29:16.391 [2024-07-12 00:48:09.786895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.391 [2024-07-12 00:48:09.786913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:29:16.392 [2024-07-12 00:48:09.786927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.392 [2024-07-12 00:48:09.786943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122848 len:8 PRP1 0x0 PRP2 0x0 00:29:16.392 [2024-07-12 00:48:09.786961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.786978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.392 [2024-07-12 00:48:09.786992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.392 [2024-07-12 00:48:09.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122856 len:8 PRP1 0x0 PRP2 0x0 00:29:16.392 [2024-07-12 00:48:09.787025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.392 [2024-07-12 00:48:09.787057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.392 [2024-07-12 00:48:09.787072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122864 len:8 PRP1 0x0 PRP2 0x0 00:29:16.392 [2024-07-12 00:48:09.787090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:16.392 [2024-07-12 00:48:09.787122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:16.392 [2024-07-12 00:48:09.787138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122872 len:8 PRP1 0x0 PRP2 0x0 00:29:16.392 [2024-07-12 00:48:09.787156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787433] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:29:16.392 [2024-07-12 00:48:09.787463] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:16.392 [2024-07-12 00:48:09.787539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:09.787568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:09.787609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:09.787659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:09.787707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:09.787725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.392 [2024-07-12 00:48:09.787784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:16.392 [2024-07-12 00:48:09.791894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.392 [2024-07-12 00:48:09.833404] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:16.392 [2024-07-12 00:48:14.385192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:14.385278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.385310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:14.385341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.385362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:14.385382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.385421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.392 [2024-07-12 00:48:14.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.385462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:16.392 [2024-07-12 00:48:14.392758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.392848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.392891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.392940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.392960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.392994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.393968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.392 [2024-07-12 00:48:14.393987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.392 [2024-07-12 00:48:14.394021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:16.393 [2024-07-12 00:48:14.394551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.393 [2024-07-12 00:48:14.394738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.394780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.394822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.394875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.394926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.394956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.394981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 
00:48:14.395014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.393 [2024-07-12 00:48:14.395467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.393 [2024-07-12 00:48:14.395492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.393 [2024-07-12 00:48:14.395512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.393 .. 00:29:16.395 [2024-07-12 00:48:14.395533 .. 00:48:14.398778] nvme_qpair.c: 243/474: [40 further queued READ commands (lba:112832 through lba:113144, len:8, SGL TRANSPORT DATA BLOCK) and 30 queued WRITE commands (lba:113480 through lba:113712, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical per-LBA records condensed]
00:29:16.395 [2024-07-12 00:48:14.398798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set
00:29:16.395 [2024-07-12 00:48:14.398824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:16.395 [2024-07-12 00:48:14.398846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:16.395 [2024-07-12 00:48:14.398864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113720 len:8 PRP1 0x0 PRP2 0x0
00:29:16.395 [2024-07-12 00:48:14.398892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.395 [2024-07-12 00:48:14.399165] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller.
00:29:16.395 [2024-07-12 00:48:14.399193] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:16.395 [2024-07-12 00:48:14.399215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:16.395 [2024-07-12 00:48:14.399286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:29:16.395 [2024-07-12 00:48:14.403490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:16.395 [2024-07-12 00:48:14.449601] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:16.395
00:29:16.395 Latency(us)
00:29:16.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:16.395 Verification LBA range: start 0x0 length 0x4000
00:29:16.395 NVMe0n1 : 15.01 6394.02 24.98 255.11 0.00 19214.02 1146.88 28955.00
00:29:16.395 ===================================================================================================================
00:29:16.395 Total : 6394.02 24.98 255.11 0.00 19214.02 1146.88 28955.00
00:29:16.395 Received shutdown signal, test time was about 15.000000 seconds
00:29:16.395
00:29:16.395 Latency(us)
00:29:16.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.395 ===================================================================================================================
00:29:16.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:16.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
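At this point failover.sh is grading the run: it counts how many successful controller resets bdevperf reported while its target paths were torn down underneath it. A minimal sketch of that check, assuming (as the later rm -f of test/nvmf/host/try.txt suggests) that the bdevperf output was captured to try.txt; variable names here are illustrative:

    # count the resets bdevperf logged during the 15s verify run
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    # the run failed the controller over three times, so demand exactly three
    (( count != 3 )) && exit 1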
00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=99217 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 99217 /var/tmp/bdevperf.sock 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 99217 ']' 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.395 00:48:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.794 00:48:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.794 00:48:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:17.794 00:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:17.794 [2024-07-12 00:48:22.543235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.794 00:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:18.064 [2024-07-12 00:48:22.863908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:18.064 00:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:18.334 NVMe0n1 00:29:18.334 00:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:18.593 00:29:18.852 00:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.111 00:29:19.111 00:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.111 00:48:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:19.369 00:48:24 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.627 00:48:24 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:22.913 00:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.913 00:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:22.913 00:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=99356 00:29:22.913 00:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:22.913 00:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 99356 00:29:24.290 0 00:29:24.290 00:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:24.290 [2024-07-12 00:48:21.350192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:24.290 [2024-07-12 00:48:21.350521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99217 ] 00:29:24.290 [2024-07-12 00:48:21.524913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.290 [2024-07-12 00:48:21.770037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.290 [2024-07-12 00:48:24.405267] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:24.290 [2024-07-12 00:48:24.405490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.290 [2024-07-12 00:48:24.405533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.290 [2024-07-12 00:48:24.405562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.290 [2024-07-12 00:48:24.405582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.290 [2024-07-12 00:48:24.405603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.290 [2024-07-12 00:48:24.405623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.290 [2024-07-12 00:48:24.405642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.290 [2024-07-12 00:48:24.405661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-12 00:48:24.405680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.291 [2024-07-12 00:48:24.405819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.291 [2024-07-12 00:48:24.405872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:24.291 [2024-07-12 00:48:24.412968] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:24.291 Running I/O for 1 seconds... 
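The block above restarts bdevperf in its RPC-driven mode and replays a single failover before grading it. Roughly, the sequence the xtrace shows (paths abbreviated to the repo root; a sketch of the flow, not a verbatim script; the alternate listeners on 4421/4422 were registered just before):

    # start bdevperf idle; -z makes it wait for an RPC-triggered run instead of starting I/O at once
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # attach the 4420 path, then pull it out from under the bdev so bdev_nvme fails over
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # kick the 1-second verify run over the same RPC socket
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests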
00:29:24.291
00:29:24.291 Latency(us)
00:29:24.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.291 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:24.291 Verification LBA range: start 0x0 length 0x4000
00:29:24.291 NVMe0n1 : 1.01 6205.91 24.24 0.00 0.00 20499.68 2919.33 17873.45
00:29:24.291 ===================================================================================================================
00:29:24.291 Total : 6205.91 24.24 0.00 0.00 20499.68 2919.33 17873.45
00:29:24.291
00:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:48:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:48:29 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:25.371
00:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:29:28.653
00:48:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:48:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:48:33 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 99217
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 99217 ']'
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 99217
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99217
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
killing process with pid 99217
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99217'
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 99217
00:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 99217
00:48:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:48:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:48:34 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:48:34 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:48:34
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.028 rmmod nvme_tcp 00:29:30.028 rmmod nvme_fabrics 00:29:30.028 rmmod nvme_keyring 00:29:30.028 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 98849 ']' 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 98849 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 98849 ']' 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 98849 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98849 00:29:30.286 killing process with pid 98849 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98849' 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 98849 00:29:30.286 00:48:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 98849 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:32.210 00:29:32.210 real 0m37.206s 00:29:32.210 user 2m21.864s 00:29:32.210 sys 0m5.197s 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.210 00:48:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:32.210 ************************************ 00:29:32.210 END TEST nvmf_failover 00:29:32.210 ************************************ 00:29:32.210 00:48:36 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:29:32.210 00:48:36 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:32.210 00:48:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:32.210 00:48:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.210 00:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.210 ************************************ 00:29:32.210 START TEST nvmf_host_discovery 00:29:32.210 ************************************ 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:32.210 * Looking for test storage... 00:29:32.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.210 00:48:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same golangci/protoc/go prefix block repeated from earlier exports, condensed...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...previous value re-prepended, condensed...]
00:48:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...previous value re-prepended, condensed...]
00:48:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:48:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...full PATH value echoed, condensed...]:/var/lib/snapd/snap/bin
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:48:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:32.211 Cannot find device "nvmf_tgt_br" 00:29:32.211 
00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:32.211 Cannot find device "nvmf_tgt_br2" 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:29:32.211 00:48:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:32.211 Cannot find device "nvmf_tgt_br" 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:32.211 Cannot find device "nvmf_tgt_br2" 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:32.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:32.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:32.211 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:32.469 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:32.469 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:32.469 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:32.469 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:32.469 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:32.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:29:32.470 00:29:32.470 --- 10.0.0.2 ping statistics --- 00:29:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.470 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:32.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:32.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:29:32.470 00:29:32.470 --- 10.0.0.3 ping statistics --- 00:29:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.470 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:32.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:29:32.470 00:29:32.470 --- 10.0.0.1 ping statistics --- 00:29:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.470 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=99678 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 99678 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 99678 ']' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.470 00:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 [2024-07-12 00:48:37.470417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:32.727 [2024-07-12 00:48:37.470607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.727 [2024-07-12 00:48:37.649185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.293 [2024-07-12 00:48:37.925093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
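For context, the interfaces those pings just exercised were stitched together a few lines earlier by the veth setup in nvmf/common.sh. A condensed sketch of that topology as the xtrace shows it (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is built the same way and omitted here, as are the individual `ip link set ... up` toggles):

    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # one bridge joins the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator-to-target reachability check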
00:29:33.293 [2024-07-12 00:48:37.925167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.293 [2024-07-12 00:48:37.925185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.293 [2024-07-12 00:48:37.925201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.293 [2024-07-12 00:48:37.925213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.293 [2024-07-12 00:48:37.925256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.552 [2024-07-12 00:48:38.469235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.552 [2024-07-12 00:48:38.477344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.552 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 null0 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 null1 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=99729 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 99729 /tmp/host.sock 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 99729 ']' 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.810 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.810 00:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 [2024-07-12 00:48:38.630363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:33.810 [2024-07-12 00:48:38.630552] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99729 ] 00:29:34.069 [2024-07-12 00:48:38.798738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.327 [2024-07-12 00:48:39.046946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.894 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:34.895 00:48:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.895 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 [2024-07-12 00:48:39.950345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:35.154 00:48:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:29:35.413 00:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:35.671 [2024-07-12 00:48:40.604257] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:35.671 [2024-07-12 00:48:40.604312] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:35.671 [2024-07-12 00:48:40.604347] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.930 [2024-07-12 00:48:40.692528] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:35.930 [2024-07-12 00:48:40.756082] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:29:35.930 [2024-07-12 00:48:40.756145] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:36.497 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.497 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:36.497 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.498 00:48:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:36.498 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.762 [2024-07-12 00:48:41.535647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.762 [2024-07-12 00:48:41.536894] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:36.762 [2024-07-12 00:48:41.537116] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:36.762 [2024-07-12 00:48:41.622599] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:36.762 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.762 [2024-07-12 00:48:41.687050] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:36.762 [2024-07-12 00:48:41.687088] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:36.762 [2024-07-12 00:48:41.687102] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:37.026 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:37.026 00:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.961 [2024-07-12 00:48:42.833438] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:37.961 [2024-07-12 00:48:42.833650] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.961 [2024-07-12 00:48:42.833923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.961 [2024-07-12 00:48:42.834106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.961 [2024-07-12 00:48:42.834263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.961 [2024-07-12 00:48:42.834389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.961 [2024-07-12 00:48:42.834437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.961 [2024-07-12 00:48:42.834458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.961 [2024-07-12 00:48:42.834474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.961 [2024-07-12 00:48:42.834488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.961 [2024-07-12 00:48:42.834502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:37.961 00:48:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:37.961 [2024-07-12 00:48:42.843827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.961 [2024-07-12 00:48:42.853846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:37.961 [2024-07-12 00:48:42.854018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.961 [2024-07-12 00:48:42.854061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:37.961 [2024-07-12 00:48:42.854080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.961 [2024-07-12 00:48:42.854107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.961 [2024-07-12 00:48:42.854131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.961 [2024-07-12 00:48:42.854154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:37.961 [2024-07-12 00:48:42.854170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:37.961 [2024-07-12 00:48:42.854196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
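The connect() failures with errno 111 (ECONNREFUSED) traced above are the point of this step: the test has just removed the 4420 listener with nvmf_subsystem_remove_listener, so the host's reconnect attempts on that path must keep failing while discovery holds on to the 4421 path. A minimal sketch of the same failover check, assuming SPDK's scripts/rpc.py and the /tmp/host.sock RPC socket used in this run (the loop bound and variable names are illustrative, not taken from the test):

    # Target side: drop the first path (default RPC socket).
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Host side: poll until only the second path remains.
    for _ in $(seq 1 10); do
        paths=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ $paths == 4421 ]] && break
        sleep 1
    done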
00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.961 [2024-07-12 00:48:42.863941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:37.961 [2024-07-12 00:48:42.864061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.961 [2024-07-12 00:48:42.864091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:37.961 [2024-07-12 00:48:42.864108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.961 [2024-07-12 00:48:42.864133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.961 [2024-07-12 00:48:42.864154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.961 [2024-07-12 00:48:42.864168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:37.961 [2024-07-12 00:48:42.864181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:37.961 [2024-07-12 00:48:42.864203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.961 [2024-07-12 00:48:42.874025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:37.961 [2024-07-12 00:48:42.874142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.961 [2024-07-12 00:48:42.874171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:37.961 [2024-07-12 00:48:42.874187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.961 [2024-07-12 00:48:42.874211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.961 [2024-07-12 00:48:42.874233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.961 [2024-07-12 00:48:42.874246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:37.961 [2024-07-12 00:48:42.874277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:37.961 [2024-07-12 00:48:42.874300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
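Every waitforcondition call in this trace expands to the same retry loop from common/autotest_common.sh: evaluate the condition string up to ten times, one second apart, and give up if it never turns true. Reconstructed from the xtrace lines tagged @912-@918 above (the real helper may differ in detail):

    waitforcondition() {
        # e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }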
00:29:37.961 [2024-07-12 00:48:42.884111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:37.961 [2024-07-12 00:48:42.884250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.961 [2024-07-12 00:48:42.884282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:37.961 [2024-07-12 00:48:42.884299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.961 [2024-07-12 00:48:42.884325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.961 [2024-07-12 00:48:42.884347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.961 [2024-07-12 00:48:42.884361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:37.961 [2024-07-12 00:48:42.884374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:37.961 [2024-07-12 00:48:42.884422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:37.961 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:37.962 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:37.962 [2024-07-12 00:48:42.894198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:37.962 [2024-07-12 00:48:42.894331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.962 [2024-07-12 00:48:42.894361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:37.962 [2024-07-12 00:48:42.894379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:37.962 [2024-07-12 00:48:42.894417] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:37.962 [2024-07-12 00:48:42.894441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.962 [2024-07-12 00:48:42.894455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:37.962 [2024-07-12 00:48:42.894469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:37.962 [2024-07-12 00:48:42.894492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.220 [2024-07-12 00:48:42.904284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:38.220 [2024-07-12 00:48:42.904432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.220 [2024-07-12 00:48:42.904475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:38.220 [2024-07-12 00:48:42.904499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:38.220 [2024-07-12 00:48:42.904525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:38.220 [2024-07-12 00:48:42.904568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:38.220 [2024-07-12 00:48:42.904586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:38.221 [2024-07-12 00:48:42.904599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:38.221 [2024-07-12 00:48:42.904622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.221 [2024-07-12 00:48:42.914373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:38.221 [2024-07-12 00:48:42.914494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.221 [2024-07-12 00:48:42.914523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.2, port=4420 00:29:38.221 [2024-07-12 00:48:42.914540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:29:38.221 [2024-07-12 00:48:42.914564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:29:38.221 [2024-07-12 00:48:42.914585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:38.221 [2024-07-12 00:48:42.914599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:38.221 [2024-07-12 00:48:42.914612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:38.221 [2024-07-12 00:48:42.914634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
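The assertions that follow lean on two helpers whose bodies are visible in the trace: get_subsystem_paths flattens a controller's trsvcid list into a sorted one-line string, and get_notification_count counts notify events past the last seen notify_id, advancing notify_id as it goes (0 -> 1 -> 2 -> 4 over the course of this test). Approximately, with the framework's xtrace plumbing stripped and scripts/rpc.py standing in for the rpc_cmd wrapper:

    get_subsystem_paths() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }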
00:29:38.221 [2024-07-12 00:48:42.919765] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:38.221 [2024-07-12 00:48:42.919810] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.221 00:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:38.221 
00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:38.221 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.478 00:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.413 [2024-07-12 00:48:44.255402] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:39.413 [2024-07-12 00:48:44.255468] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:39.413 [2024-07-12 00:48:44.255502] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:39.413 [2024-07-12 00:48:44.342710] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:39.672 [2024-07-12 00:48:44.411750] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:39.672 [2024-07-12 00:48:44.411818] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.672 2024/07/12 00:48:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:29:39.672 request: 00:29:39.672 { 00:29:39.672 "method": "bdev_nvme_start_discovery", 00:29:39.672 "params": { 00:29:39.672 "name": "nvme", 00:29:39.672 "trtype": "tcp", 00:29:39.672 "traddr": "10.0.0.2", 00:29:39.672 "adrfam": "ipv4", 00:29:39.672 "trsvcid": "8009", 00:29:39.672 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:39.672 "wait_for_attach": true 00:29:39.672 } 00:29:39.672 } 00:29:39.672 Got JSON-RPC error response 00:29:39.672 GoRPCClient: error on JSON-RPC call 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:39.672 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.673 2024/07/12 00:48:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:29:39.673 request: 00:29:39.673 { 00:29:39.673 "method": "bdev_nvme_start_discovery", 00:29:39.673 "params": { 00:29:39.673 "name": "nvme_second", 00:29:39.673 "trtype": "tcp", 00:29:39.673 "traddr": "10.0.0.2", 00:29:39.673 "adrfam": "ipv4", 00:29:39.673 "trsvcid": "8009", 00:29:39.673 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:39.673 "wait_for_attach": true 00:29:39.673 } 00:29:39.673 } 00:29:39.673 Got JSON-RPC error response 00:29:39.673 GoRPCClient: error on JSON-RPC call 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:39.673 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.931 00:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.865 [2024-07-12 00:48:45.676618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.865 [2024-07-12 00:48:45.677076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:29:40.865 [2024-07-12 00:48:45.677158] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:40.865 [2024-07-12 00:48:45.677177] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:40.865 [2024-07-12 00:48:45.677195] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:41.802 [2024-07-12 00:48:46.676661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.802 [2024-07-12 00:48:46.676756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:29:41.802 [2024-07-12 00:48:46.676839] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:41.802 [2024-07-12 00:48:46.676871] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:41.802 [2024-07-12 00:48:46.676886] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:43.178 [2024-07-12 00:48:47.676240] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching 
discovery ctrlr 00:29:43.178 2024/07/12 00:48:47 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:29:43.178 request: 00:29:43.178 { 00:29:43.178 "method": "bdev_nvme_start_discovery", 00:29:43.178 "params": { 00:29:43.178 "name": "nvme_second", 00:29:43.178 "trtype": "tcp", 00:29:43.178 "traddr": "10.0.0.2", 00:29:43.178 "adrfam": "ipv4", 00:29:43.178 "trsvcid": "8010", 00:29:43.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:43.178 "wait_for_attach": false, 00:29:43.178 "attach_timeout_ms": 3000 00:29:43.178 } 00:29:43.178 } 00:29:43.178 Got JSON-RPC error response 00:29:43.178 GoRPCClient: error on JSON-RPC call 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 99729 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.178 rmmod nvme_tcp 00:29:43.178 rmmod nvme_fabrics 00:29:43.178 rmmod nvme_keyring 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:29:43.178 
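Every assertion in the discovery test above is driven by one polling helper plus plain JSON-RPC calls over the host socket. The following is a minimal sketch of that pattern, reconstructed from the common/autotest_common.sh trace above; the rpc_cmd flags are exactly the ones traced, while the retry delay and the failure return are assumptions (every wait above succeeded on the first pass, so neither appears in the trace):

  # Retry a shell condition up to 10 times -- approximates the
  # waitforcondition helper whose local/eval/max-- steps are traced above.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1      # assumed delay between retries; not visible in the trace
      done
      return 1         # assumed failure path; all waits above returned 0
  }

  # Start discovery and block until the first attach completes (-w), as
  # host/discovery.sh does above. Re-running with the same -b name fails
  # with Code=-17 (File exists); a bounded attach passes -T <ms> instead
  # of -w and fails with Code=-110 (Connection timed out) if nothing
  # answers, matching the two error responses logged above.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'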
00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 99678 ']' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 99678 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 99678 ']' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 99678 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99678 00:29:43.178 killing process with pid 99678 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99678' 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 99678 00:29:43.178 00:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 99678 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:44.554 00:29:44.554 real 0m12.510s 00:29:44.554 user 0m24.189s 00:29:44.554 sys 0m1.959s 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 ************************************ 00:29:44.554 END TEST nvmf_host_discovery 00:29:44.554 ************************************ 00:29:44.554 00:48:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:44.554 00:48:49 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:44.554 00:48:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:44.554 00:48:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.554 00:48:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 ************************************ 00:29:44.554 START TEST nvmf_host_multipath_status 00:29:44.554 ************************************ 00:29:44.554 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:44.813 * Looking 
for test storage... 00:29:44.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:44.814 Cannot find device "nvmf_tgt_br" 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:29:44.814 Cannot find device "nvmf_tgt_br2" 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:44.814 Cannot find device "nvmf_tgt_br" 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:44.814 Cannot find device "nvmf_tgt_br2" 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:44.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:44.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:44.814 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:44.815 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:44.815 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:44.815 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:45.073 00:48:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:45.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:29:45.073 00:29:45.073 --- 10.0.0.2 ping statistics --- 00:29:45.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.073 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:45.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:45.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:29:45.073 00:29:45.073 --- 10.0.0.3 ping statistics --- 00:29:45.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.073 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:45.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:29:45.073 00:29:45.073 --- 10.0.0.1 ping statistics --- 00:29:45.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.073 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=100218 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 100218 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100218 ']' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:45.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:45.073 00:48:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:45.073 [2024-07-12 00:48:49.999166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
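The nvmf_veth_init sequence above builds a fixed veth/bridge topology: the initiator side stays in the root namespace at 10.0.0.1, the target interfaces move into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and the host-side peers are enslaved to one bridge. Condensed from the ip/iptables commands traced above (a sketch: the second target pair nvmf_tgt_if2/nvmf_tgt_br2 carrying 10.0.0.3 is built the same way and omitted here, as is the teardown that precedes setup):

  # Namespace plus two veth pairs: initiator side and target side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1 in the root namespace, target 10.0.0.2 inside.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring the links up on both sides of the namespace boundary.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # One bridge joins the host-side peers, so 10.0.0.1 can reach 10.0.0.2.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Admit NVMe/TCP on port 4420, forward across the bridge, then verify
  # reachability exactly as the ping statistics above do.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2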
00:29:45.073 [2024-07-12 00:48:49.999353] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.332 [2024-07-12 00:48:50.181986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:45.589 [2024-07-12 00:48:50.491816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.589 [2024-07-12 00:48:50.491888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.589 [2024-07-12 00:48:50.491910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.589 [2024-07-12 00:48:50.491928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.589 [2024-07-12 00:48:50.491951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.589 [2024-07-12 00:48:50.492218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.589 [2024-07-12 00:48:50.492389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=100218 00:29:46.153 00:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:46.411 [2024-07-12 00:48:51.251713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.411 00:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:46.981 Malloc0 00:29:46.981 00:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:46.981 00:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.548 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.806 [2024-07-12 00:48:52.496584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.806 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:29:48.063 [2024-07-12 00:48:52.768919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=100326 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 100326 /var/tmp/bdevperf.sock 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100326 ']' 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:48.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:48.063 00:48:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:48.993 00:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:48.993 00:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:29:48.993 00:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:49.250 00:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:49.815 Nvme0n1 00:29:49.815 00:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:50.095 Nvme0n1 00:29:50.095 00:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:50.095 00:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:51.994 00:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:51.994 00:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:52.285 00:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 -n optimized 00:29:52.575 00:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:53.948 00:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:54.206 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:54.206 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:54.206 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.206 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:54.464 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:54.464 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:54.464 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:54.464 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.722 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:54.722 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:54.722 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:54.722 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.980 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:54.980 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:54.980 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:29:54.980 00:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.238 00:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.238 00:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:55.238 00:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:55.497 00:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:55.756 00:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.206 00:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:57.491 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:57.491 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:57.491 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.491 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:57.748 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:57.748 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:57.748 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.748 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:58.006 00:49:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.006 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:58.006 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:58.006 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.264 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.264 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:58.264 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.264 00:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:58.523 00:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.523 00:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:58.523 00:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:58.781 00:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:58.781 00:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.153 00:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:00.410 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:00.410 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:00.410 00:49:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.410 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:00.669 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.669 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:00.669 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.669 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:00.927 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.927 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:00.928 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.928 00:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:01.185 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.185 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:01.185 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:01.185 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:01.751 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.751 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:01.751 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:01.751 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:02.316 00:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:03.250 00:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:03.250 00:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:03.250 00:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.250 00:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:03.517 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.517 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:03.517 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.517 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:03.781 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:03.781 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:03.781 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.781 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:04.038 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.038 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:04.038 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.038 00:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:04.296 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.296 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:04.296 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.296 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:04.555 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.555 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:04.555 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.555 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:04.813 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:04.813 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:04.813 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:05.072 00:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:05.331 00:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:06.264 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:06.264 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:06.264 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.264 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:06.522 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:06.522 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:06.522 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.522 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:06.780 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:06.780 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:06.780 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:06.780 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.374 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:07.374 00:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:07.374 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.635 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:07.635 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:07.635 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:07.635 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:07.895 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:07.895 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:07.895 00:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:08.171 00:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:08.436 00:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.810 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:10.069 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.069 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:10.069 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.069 00:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:10.327 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.327 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:10.327 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.327 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:10.585 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.585 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:10.585 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:10.585 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.844 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:10.844 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:10.844 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.844 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:11.102 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.102 00:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:11.361 00:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:11.361 00:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:11.619 00:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:11.878 00:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:12.812 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:12.812 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:12.812 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.812 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:13.071 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.071 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:13.071 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.071 00:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:13.676 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.676 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:13.676 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.676 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:13.934 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.934 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:13.934 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.934 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:14.193 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.193 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:14.193 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:14.193 00:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.451 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.451 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:14.451 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.451 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:14.711 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.711 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:14.711 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:14.969 00:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:15.227 00:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:16.603 
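
Each port_status probe traced here is the same two-step pipeline: query the bdevperf application over its dedicated RPC socket (/var/tmp/bdevperf.sock) for its current I/O paths, then pull one attribute of the path whose trsvcid matches. A minimal sketch of that helper, reconstructed from the multipath_status.sh@64 trace lines; the function body and variable names are illustrative, not the script's verbatim source:

    # Sketch reconstructed from the multipath_status.sh@64 trace; illustrative only.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path as used in this run
    bdevperf_sock=/var/tmp/bdevperf.sock                  # bdevperf's RPC socket

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Dump all discovered I/O paths, then select the attribute for this listener port.
        actual=$("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

check_status (multipath_status.sh@68-@73) is then just six such probes in a row, current, connected and accessible for port 4420 and then 4421, which is exactly the six-probe pattern each block of this trace repeats.
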
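The ANA flips themselves go to the target side over the default RPC socket, one nvmf_subsystem_listener_set_ana_state call per listener, followed by a one-second sleep so the initiator can observe the ANA change before the next check_status. A sketch matching the multipath_status.sh@59-@60 trace; the NQN, address and ports are the values used throughout this run:

    # Sketch matching the multipath_status.sh@59-@60 trace; illustrative only.
    # rpc_py as in the sketch above.
    nqn=nqn.2016-06.io.spdk:cnode1
    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
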
00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.603 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:16.862 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.862 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:16.862 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.862 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:17.121 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.121 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:17.121 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.121 00:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:17.379 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.379 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:17.379 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.379 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:17.637 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.637 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:17.637 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.637 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:17.894 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.894 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:17.894 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:18.151 00:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:18.408 00:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:19.345 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:19.345 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:19.345 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.345 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:19.910 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.911 00:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.477 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.477 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:20.477 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.477 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:20.478 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.478 00:49:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:20.478 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.478 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:20.735 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.735 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:20.735 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:20.735 00:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.301 00:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.301 00:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:21.301 00:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:21.559 00:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:21.819 00:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:22.754 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:22.754 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:22.754 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.754 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:23.015 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.015 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:23.015 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.015 00:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:23.275 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.275 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:23.275 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.275 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:23.533 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.533 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:23.533 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.533 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:23.791 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.791 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:23.791 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:23.792 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.050 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.050 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:24.050 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:24.050 00:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 100326 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100326 ']' 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100326 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100326 00:30:24.309 killing process with pid 100326 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100326' 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100326 00:30:24.309 00:49:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100326 00:30:25.243 Connection closed with partial response: 00:30:25.243 
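
Pulling the expected values out of the check_status calls in this run gives a compact picture of what the test asserts. Under the default active_passive policy only one path may be current at a time (the first usable one wins when both are non_optimized); after bdev_nvme_set_multipath_policy -p active_active (multipath_status.sh@116) every non-inaccessible path becomes current. Note also that connected stays true even for an inaccessible listener: the TCP connection is kept, the path is merely unusable for I/O. Summarized from the trace above, in check_status column order (current/connected/accessible per port):

    ANA state 4420 / 4421            policy           4420 cur/conn/acc    4421 cur/conn/acc
    non_optimized / optimized        active_passive   false/true /true     true /true/true
    non_optimized / non_optimized    active_passive   true /true /true     false/true/true
    non_optimized / inaccessible     active_passive   true /true /true     false/true/false
    inaccessible  / inaccessible     active_passive   false/true /false    false/true/false
    inaccessible  / optimized        active_passive   false/true /false    true /true/true
    optimized     / optimized        active_active    true /true /true     true /true/true
    non_optimized / optimized        active_active    false/true /true     true /true/true
    non_optimized / non_optimized    active_active    true /true /true     true /true/true
    non_optimized / inaccessible     active_active    true /true /true     false/true/false
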
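With the final check done, the harness tears bdevperf down through autotest_common.sh's killprocess (pid 100326 in this run); "Connection closed with partial response:" is bdevperf reporting that in-flight I/O was cut off as the process died, after which @139 waits for the pid and @141 dumps bdevperf's own log (try.txt) below. The guard sequence visible in the @948-@972 trace lines amounts to roughly the following; a sketch of the flow, not the helper's verbatim source (the sudo special case checked at @958 is elided here):

    # Sketch of the killprocess flow traced above (@948-@972); illustrative only.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1           # @948: a pid must be supplied
        kill -0 "$pid" || return 1            # @952: the process must still be alive
        if [[ "$(uname)" == Linux ]]; then    # @953
            ps --no-headers -o comm= "$pid"   # @954: resolve the process name (reactor_2 here)
        fi                                    # @958: sudo gets special handling (elided)
        echo "killing process with pid $pid"  # @966
        kill "$pid"                           # @967
        wait "$pid"                           # @972: reap the child and collect its exit status
    }
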
00:30:25.243 00:30:25.810 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 100326 00:30:25.810 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:25.810 [2024-07-12 00:48:52.890340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:25.810 [2024-07-12 00:48:52.890614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100326 ] 00:30:25.810 [2024-07-12 00:48:53.059602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.810 [2024-07-12 00:48:53.342583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.810 Running I/O for 90 seconds... 00:30:25.810 [2024-07-12 00:49:09.878330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.878958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.878980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 
m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.879971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.879993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.880025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.880047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.880094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.880118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.880780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.880843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.880883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.810 [2024-07-12 00:49:09.880906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.880947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.880969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 
00:49:09.881617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.881969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.881991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115944 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.810 [2024-07-12 00:49:09.882853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:25.810 [2024-07-12 00:49:09.882889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.882913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:30:25.811 [2024-07-12 00:49:09.883689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.883942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.883966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.884964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.884988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.885074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:09.885135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.885939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.885963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.886947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 
00:49:09.887006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:09.887807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:09.887831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.811 [2024-07-12 00:49:26.528064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.528154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:26.528185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.528222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:26.528247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.528281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:26.528306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.528340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.811 [2024-07-12 00:49:26.528408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:25.811 [2024-07-12 00:49:26.528448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.528960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.528983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:25.812 [2024-07-12 00:49:26.529160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.529263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.529470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.529528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.529585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.529642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.529736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.529760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.812 [2024-07-12 00:49:26.531719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.531936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.531971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:30:25.812 [2024-07-12 00:49:26.532490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.812 [2024-07-12 00:49:26.532863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.812 [2024-07-12 00:49:26.532887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.533618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.533688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.533748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.533806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.533871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.813 [2024-07-12 00:49:26.533928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.533962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.813 [2024-07-12 00:49:26.533985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.813 [2024-07-12 00:49:26.534073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.813 [2024-07-12 00:49:26.534129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.534186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.534243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.534300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:25.813 [2024-07-12 00:49:26.534333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.813 [2024-07-12 00:49:26.534356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:25.813 [2024-07-12 00:49:26.534402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:25.813 [2024-07-12 00:49:26.534427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:25.813 Received shutdown signal, test time was about 34.262218 seconds
00:30:25.813
00:30:25.813 Latency(us)
00:30:25.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:25.813 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:25.813 Verification LBA range: start 0x0 length 0x4000
00:30:25.813 Nvme0n1 : 34.26 5986.12 23.38 0.00 0.00 21344.60 796.86 4026531.84
00:30:25.813 ===================================================================================================================
00:30:25.813 Total : 5986.12 23.38 0.00 0.00 21344.60 796.86 4026531.84
00:30:25.813 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:26.071 rmmod nvme_tcp
00:30:26.071 rmmod nvme_fabrics
00:30:26.071 rmmod nvme_keyring
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 100218 ']'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 100218
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100218 ']'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100218
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100218
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:26.071 killing process with pid 100218
00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100218'
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100218
00:30:26.071 00:49:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100218
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:30:27.444
00:30:27.444 real 0m42.884s
00:30:27.444 user 2m18.130s
00:30:27.444 sys 0m9.648s
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:27.444 00:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:27.444 ************************************
00:30:27.444 END TEST nvmf_host_multipath_status
00:30:27.444 ************************************
00:30:27.444 00:49:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:27.444 00:49:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:27.444 00:49:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:27.444 00:49:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:27.444 00:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:27.444 ************************************
00:30:27.444 START TEST nvmf_discovery_remove_ifc
00:30:27.444 ************************************
00:30:27.444 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:30:27.703 * Looking for test storage...
00:30:27.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.703 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:27.704 Cannot find device "nvmf_tgt_br" 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:30:27.704 Cannot find device "nvmf_tgt_br2" 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:27.704 Cannot find device "nvmf_tgt_br" 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:27.704 Cannot find device "nvmf_tgt_br2" 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:27.704 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:27.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:27.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:27.963 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:28.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:30:28.229 00:30:28.229 --- 10.0.0.2 ping statistics --- 00:30:28.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.229 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:28.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:28.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:30:28.229 00:30:28.229 --- 10.0.0.3 ping statistics --- 00:30:28.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.229 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:28.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:30:28.229 00:30:28.229 --- 10.0.0.1 ping statistics --- 00:30:28.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.229 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=101640 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 101640 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 101640 ']' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:28.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:28.229 00:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:28.229 [2024-07-12 00:49:33.099971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:28.229 [2024-07-12 00:49:33.100161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.547 [2024-07-12 00:49:33.282190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.805 [2024-07-12 00:49:33.553291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.805 [2024-07-12 00:49:33.553374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.805 [2024-07-12 00:49:33.553404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.805 [2024-07-12 00:49:33.553436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.805 [2024-07-12 00:49:33.553449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.805 [2024-07-12 00:49:33.553520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 [2024-07-12 00:49:34.074869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.372 [2024-07-12 00:49:34.083037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:29.372 null0 00:30:29.372 [2024-07-12 00:49:34.115028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=101696 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 101696 /tmp/host.sock 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 101696 ']' 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.372 Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock... 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.372 00:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:29.372 [2024-07-12 00:49:34.262199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:29.372 [2024-07-12 00:49:34.262383] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101696 ] 00:30:29.630 [2024-07-12 00:49:34.440727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.888 [2024-07-12 00:49:34.728610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.454 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:30.713 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.713 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:30.713 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.713 00:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:31.649 [2024-07-12 00:49:36.530187] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:31.649 [2024-07-12 00:49:36.530254] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:31.649 [2024-07-12 00:49:36.530289] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:31.908 [2024-07-12 00:49:36.617478] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 
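The host-side bring-up just traced happens entirely over the /tmp/host.sock RPC channel. Condensed into direct rpc.py calls it looks like the sketch below; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and $SPDK_DIR standing for the spdk checkout is an assumption of this sketch, not something the trace defines:

rpc="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock"
$rpc bdev_nvme_set_options -e 1      # options must go in first: the app was started with --wait-for-rpc
$rpc framework_start_init            # ...so the framework holds off init until this call
$rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach   # blocks until the discovered ctrlr attaches

All flags are copied verbatim from the trace; only the rpc.py invocation form is inferred.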
00:30:31.908 [2024-07-12 00:49:36.681572] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:31.908 [2024-07-12 00:49:36.681690] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:31.908 [2024-07-12 00:49:36.681760] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:31.908 [2024-07-12 00:49:36.681793] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:31.908 [2024-07-12 00:49:36.681835] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:31.908 [2024-07-12 00:49:36.688494] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
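The get_bdev_list/wait_for_bdev pair invoked here, and repeatedly below, reduces to the pipeline visible in the trace. A minimal reconstruction, assuming the same /tmp/host.sock socket (the helper in discovery_remove_ifc.sh may carry extra guards; the unbounded poll loop is a simplification):

get_bdev_list() {
    # list bdev names as one sorted, space-separated string
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once a second until the bdev list equals the expected string
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}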
00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:31.908 00:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:33.333 00:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
jq -r '.[].name' 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:34.268 00:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:35.202 00:49:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:36.135 00:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:36.135 00:49:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
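This sleep loop is waiting out a deliberate fault: a few entries back the test deleted the target's address and downed its interface inside the namespace, so the errno-110 timeouts that follow are the expected outcome. The fault-injection step, exactly as it appears in the trace:

ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''    # spin until nvme0n1 drops out, i.e. the bdev list is empty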
00:30:37.508 [2024-07-12 00:49:42.109437] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:37.508 [2024-07-12 00:49:42.109565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.508 [2024-07-12 00:49:42.109591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.508 [2024-07-12 00:49:42.109612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.508 [2024-07-12 00:49:42.109627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.508 [2024-07-12 00:49:42.109642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.508 [2024-07-12 00:49:42.109656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.508 [2024-07-12 00:49:42.109671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.508 [2024-07-12 00:49:42.109684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.508 [2024-07-12 00:49:42.109699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.508 [2024-07-12 00:49:42.109713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.508 [2024-07-12 00:49:42.109726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:37.508 00:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:37.508 [2024-07-12 00:49:42.119430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:30:37.508 [2024-07-12 00:49:42.129454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.441 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:38.441 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:38.442 [2024-07-12 00:49:43.168574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:38.442 [2024-07-12 
00:49:43.168729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:30:38.442 [2024-07-12 00:49:43.168781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:30:38.442 [2024-07-12 00:49:43.168889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:30:38.442 [2024-07-12 00:49:43.170303] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:38.442 [2024-07-12 00:49:43.170477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.442 [2024-07-12 00:49:43.170517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.442 [2024-07-12 00:49:43.170549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.442 [2024-07-12 00:49:43.170646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.442 [2024-07-12 00:49:43.170685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:38.442 00:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:39.375 [2024-07-12 00:49:44.170789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:39.375 [2024-07-12 00:49:44.170873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:39.375 [2024-07-12 00:49:44.170892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:39.375 [2024-07-12 00:49:44.170909] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:39.375 [2024-07-12 00:49:44.170947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
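The timestamps line up with the discovery options passed at attach time: the first disconnect lands at 00:49:42, --reconnect-delay-sec 1 schedules the retry seen at 00:49:43, and --ctrlr-loss-timeout-sec 2 lets the controller be declared lost at 00:49:44. While that window is open, controller state could be watched from a second shell; a hypothetical spot-check using the stock bdev_nvme_get_controllers RPC (the 1-second watch interval is an arbitrary choice, not something the test does):

watch -n 1 "$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers"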
00:30:39.375 [2024-07-12 00:49:44.171020] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:39.375 [2024-07-12 00:49:44.171094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.375 [2024-07-12 00:49:44.171118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.375 [2024-07-12 00:49:44.171139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.375 [2024-07-12 00:49:44.171153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.376 [2024-07-12 00:49:44.171168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.376 [2024-07-12 00:49:44.171182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.376 [2024-07-12 00:49:44.171197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.376 [2024-07-12 00:49:44.171211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.376 [2024-07-12 00:49:44.171226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.376 [2024-07-12 00:49:44.171239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.376 [2024-07-12 00:49:44.171252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:30:39.376 [2024-07-12 00:49:44.171321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:30:39.376 [2024-07-12 00:49:44.172302] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:39.376 [2024-07-12 00:49:44.172337] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:39.376 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.634 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:39.634 00:49:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.570 00:49:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:40.570 00:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:41.505 [2024-07-12 00:49:46.183511] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:41.505 [2024-07-12 00:49:46.183589] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:41.505 [2024-07-12 00:49:46.183646] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:41.505 [2024-07-12 00:49:46.269720] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:41.505 [2024-07-12 00:49:46.334726] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:41.505 [2024-07-12 00:49:46.334868] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:41.505 [2024-07-12 00:49:46.334940] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:41.505 [2024-07-12 00:49:46.334970] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:41.505 [2024-07-12 00:49:46.334988] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:41.505 [2024-07-12 00:49:46.341836] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
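Recovery mirrors the fault: the test re-added the address and brought the interface back up (both commands appear in the trace just above), after which the discovery poller re-attaches the subsystem under the next free controller name, nvme1, so the bdev to wait for is now nvme1n1 rather than nvme0n1:

ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1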
00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:41.505 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 101696 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 101696 ']' 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 101696 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101696 00:30:41.762 killing process with pid 101696 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101696' 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 101696 00:30:41.762 00:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 101696 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:43.135 rmmod nvme_tcp 00:30:43.135 rmmod nvme_fabrics 00:30:43.135 rmmod nvme_keyring 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:30:43.135 
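Both shutdowns, the host app killed above and the target killed just below, go through the same killprocess pattern. Reconstructed here from the commands visible in the trace (the real helper also special-cases processes launched via sudo, which is what the '[ reactor_0 = sudo ]' check probes; that branch is omitted):

killprocess() {
    local pid=$1 process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # block until the process actually exits before the next test starts
}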
00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 101640 ']' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 101640 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 101640 ']' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 101640 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101640 00:30:43.135 killing process with pid 101640 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101640' 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 101640 00:30:43.135 00:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 101640 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:44.543 00:30:44.543 real 0m16.884s 00:30:44.543 user 0m29.050s 00:30:44.543 sys 0m2.003s 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.543 00:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:44.543 ************************************ 00:30:44.543 END TEST nvmf_discovery_remove_ifc 00:30:44.543 ************************************ 00:30:44.543 00:49:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:44.543 00:49:49 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:44.543 00:49:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:44.543 00:49:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.543 00:49:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:44.543 ************************************ 00:30:44.543 START TEST nvmf_identify_kernel_target 00:30:44.543 ************************************ 00:30:44.543 00:49:49 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:44.543 * Looking for test storage... 00:30:44.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.543 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.544 00:49:49 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.544 00:49:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:44.544 Cannot find device "nvmf_tgt_br" 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:44.544 Cannot find device "nvmf_tgt_br2" 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:44.544 Cannot find device "nvmf_tgt_br" 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:44.544 
Cannot find device "nvmf_tgt_br2" 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:30:44.544 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:44.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:44.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:44.803 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:44.804 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:45.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:30:45.063 00:30:45.063 --- 10.0.0.2 ping statistics --- 00:30:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.063 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:45.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:45.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:30:45.063 00:30:45.063 --- 10.0.0.3 ping statistics --- 00:30:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.063 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:45.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:30:45.063 00:30:45.063 --- 10.0.0.1 ping statistics --- 00:30:45.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.063 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:30:45.063 00:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:30:45.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:45.322 Waiting for block devices as requested
00:30:45.581 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:30:45.581 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:30:45.581 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:30:45.582 No valid GPT data, bailing
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]]
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt
00:30:45.582 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:30:45.841 No valid GPT data, bailing
00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391
-- # pt= 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:45.841 No valid GPT data, bailing 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:45.841 No valid GPT data, bailing 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:30:45.841 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
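
What the scan above amounts to: nvmf/common.sh walks /sys/block/nvme*, skips zoned namespaces (is_block_zoned looks at queue/zoned for anything other than none), and treats every namespace whose GPT/PTTYPE probes come back empty as free, overwriting nvme each time, so the last free namespace (/dev/nvme1n1 here) becomes the backing device for the kernel target. A rough standalone sketch of that selection with the helpers inlined; the blkid-only freeness test is a simplification of block_in_use, which also consults scripts/spdk-gpt.py:

  nvme=""
  for block in /sys/block/nvme*; do
      [[ -e $block ]] || continue
      dev=${block##*/}
      # is_block_zoned: "none" in queue/zoned means a regular namespace
      if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
          continue
      fi
      # no partition-table type reported -> nothing is using the namespace
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null) ]]; then
          nvme=/dev/$dev            # last free namespace wins
      fi
  done
  [[ -b $nvme ]] || exit 1          # bail if no usable device was found
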
00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:30:45.842 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:46.131 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.1 -t tcp -s 4420 00:30:46.131 00:30:46.131 Discovery Log Number of Records 2, Generation counter 2 00:30:46.131 =====Discovery Log Entry 0====== 00:30:46.131 trtype: tcp 00:30:46.131 adrfam: ipv4 00:30:46.131 subtype: current discovery subsystem 00:30:46.131 treq: not specified, sq flow control disable supported 00:30:46.131 portid: 1 00:30:46.131 trsvcid: 4420 00:30:46.131 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:46.131 traddr: 10.0.0.1 00:30:46.131 eflags: none 00:30:46.131 sectype: none 00:30:46.131 =====Discovery Log Entry 1====== 00:30:46.131 trtype: tcp 00:30:46.131 adrfam: ipv4 00:30:46.131 subtype: nvme subsystem 00:30:46.131 treq: not specified, sq flow control disable supported 00:30:46.131 portid: 1 00:30:46.131 trsvcid: 4420 00:30:46.131 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:46.131 traddr: 10.0.0.1 00:30:46.131 eflags: none 00:30:46.131 sectype: none 00:30:46.131 00:49:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:46.131 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:46.131 ===================================================== 00:30:46.131 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:46.131 ===================================================== 00:30:46.131 Controller Capabilities/Features 00:30:46.131 ================================ 00:30:46.131 Vendor ID: 0000 00:30:46.131 Subsystem Vendor ID: 0000 00:30:46.131 Serial Number: 36d18f750505f6e214b2 00:30:46.131 Model Number: Linux 00:30:46.131 Firmware Version: 6.7.0-68 00:30:46.131 Recommended Arb Burst: 0 00:30:46.131 IEEE OUI Identifier: 00 00 00 00:30:46.131 Multi-path I/O 00:30:46.131 May have multiple subsystem ports: No 00:30:46.131 May have multiple controllers: No 00:30:46.131 Associated with SR-IOV VF: No 00:30:46.131 Max Data Transfer Size: Unlimited 00:30:46.131 Max Number of Namespaces: 0 
00:30:46.131 Max Number of I/O Queues: 1024 00:30:46.131 NVMe Specification Version (VS): 1.3 00:30:46.131 NVMe Specification Version (Identify): 1.3 00:30:46.131 Maximum Queue Entries: 1024 00:30:46.131 Contiguous Queues Required: No 00:30:46.131 Arbitration Mechanisms Supported 00:30:46.131 Weighted Round Robin: Not Supported 00:30:46.131 Vendor Specific: Not Supported 00:30:46.132 Reset Timeout: 7500 ms 00:30:46.132 Doorbell Stride: 4 bytes 00:30:46.132 NVM Subsystem Reset: Not Supported 00:30:46.132 Command Sets Supported 00:30:46.132 NVM Command Set: Supported 00:30:46.132 Boot Partition: Not Supported 00:30:46.132 Memory Page Size Minimum: 4096 bytes 00:30:46.132 Memory Page Size Maximum: 4096 bytes 00:30:46.132 Persistent Memory Region: Not Supported 00:30:46.132 Optional Asynchronous Events Supported 00:30:46.132 Namespace Attribute Notices: Not Supported 00:30:46.132 Firmware Activation Notices: Not Supported 00:30:46.132 ANA Change Notices: Not Supported 00:30:46.132 PLE Aggregate Log Change Notices: Not Supported 00:30:46.132 LBA Status Info Alert Notices: Not Supported 00:30:46.132 EGE Aggregate Log Change Notices: Not Supported 00:30:46.132 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.132 Zone Descriptor Change Notices: Not Supported 00:30:46.132 Discovery Log Change Notices: Supported 00:30:46.132 Controller Attributes 00:30:46.132 128-bit Host Identifier: Not Supported 00:30:46.132 Non-Operational Permissive Mode: Not Supported 00:30:46.132 NVM Sets: Not Supported 00:30:46.132 Read Recovery Levels: Not Supported 00:30:46.132 Endurance Groups: Not Supported 00:30:46.132 Predictable Latency Mode: Not Supported 00:30:46.132 Traffic Based Keep ALive: Not Supported 00:30:46.132 Namespace Granularity: Not Supported 00:30:46.132 SQ Associations: Not Supported 00:30:46.132 UUID List: Not Supported 00:30:46.132 Multi-Domain Subsystem: Not Supported 00:30:46.132 Fixed Capacity Management: Not Supported 00:30:46.132 Variable Capacity Management: Not Supported 00:30:46.132 Delete Endurance Group: Not Supported 00:30:46.132 Delete NVM Set: Not Supported 00:30:46.132 Extended LBA Formats Supported: Not Supported 00:30:46.132 Flexible Data Placement Supported: Not Supported 00:30:46.132 00:30:46.132 Controller Memory Buffer Support 00:30:46.132 ================================ 00:30:46.132 Supported: No 00:30:46.132 00:30:46.132 Persistent Memory Region Support 00:30:46.132 ================================ 00:30:46.132 Supported: No 00:30:46.132 00:30:46.132 Admin Command Set Attributes 00:30:46.132 ============================ 00:30:46.132 Security Send/Receive: Not Supported 00:30:46.132 Format NVM: Not Supported 00:30:46.132 Firmware Activate/Download: Not Supported 00:30:46.132 Namespace Management: Not Supported 00:30:46.132 Device Self-Test: Not Supported 00:30:46.132 Directives: Not Supported 00:30:46.132 NVMe-MI: Not Supported 00:30:46.132 Virtualization Management: Not Supported 00:30:46.132 Doorbell Buffer Config: Not Supported 00:30:46.132 Get LBA Status Capability: Not Supported 00:30:46.132 Command & Feature Lockdown Capability: Not Supported 00:30:46.132 Abort Command Limit: 1 00:30:46.132 Async Event Request Limit: 1 00:30:46.132 Number of Firmware Slots: N/A 00:30:46.132 Firmware Slot 1 Read-Only: N/A 00:30:46.392 Firmware Activation Without Reset: N/A 00:30:46.392 Multiple Update Detection Support: N/A 00:30:46.392 Firmware Update Granularity: No Information Provided 00:30:46.392 Per-Namespace SMART Log: No 00:30:46.392 Asymmetric Namespace Access Log Page: 
Not Supported 00:30:46.392 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:46.392 Command Effects Log Page: Not Supported 00:30:46.392 Get Log Page Extended Data: Supported 00:30:46.392 Telemetry Log Pages: Not Supported 00:30:46.392 Persistent Event Log Pages: Not Supported 00:30:46.392 Supported Log Pages Log Page: May Support 00:30:46.392 Commands Supported & Effects Log Page: Not Supported 00:30:46.392 Feature Identifiers & Effects Log Page:May Support 00:30:46.392 NVMe-MI Commands & Effects Log Page: May Support 00:30:46.392 Data Area 4 for Telemetry Log: Not Supported 00:30:46.392 Error Log Page Entries Supported: 1 00:30:46.392 Keep Alive: Not Supported 00:30:46.392 00:30:46.392 NVM Command Set Attributes 00:30:46.392 ========================== 00:30:46.392 Submission Queue Entry Size 00:30:46.392 Max: 1 00:30:46.392 Min: 1 00:30:46.392 Completion Queue Entry Size 00:30:46.392 Max: 1 00:30:46.392 Min: 1 00:30:46.392 Number of Namespaces: 0 00:30:46.392 Compare Command: Not Supported 00:30:46.392 Write Uncorrectable Command: Not Supported 00:30:46.392 Dataset Management Command: Not Supported 00:30:46.392 Write Zeroes Command: Not Supported 00:30:46.392 Set Features Save Field: Not Supported 00:30:46.392 Reservations: Not Supported 00:30:46.392 Timestamp: Not Supported 00:30:46.392 Copy: Not Supported 00:30:46.392 Volatile Write Cache: Not Present 00:30:46.392 Atomic Write Unit (Normal): 1 00:30:46.392 Atomic Write Unit (PFail): 1 00:30:46.392 Atomic Compare & Write Unit: 1 00:30:46.392 Fused Compare & Write: Not Supported 00:30:46.392 Scatter-Gather List 00:30:46.392 SGL Command Set: Supported 00:30:46.392 SGL Keyed: Not Supported 00:30:46.392 SGL Bit Bucket Descriptor: Not Supported 00:30:46.392 SGL Metadata Pointer: Not Supported 00:30:46.392 Oversized SGL: Not Supported 00:30:46.392 SGL Metadata Address: Not Supported 00:30:46.392 SGL Offset: Supported 00:30:46.392 Transport SGL Data Block: Not Supported 00:30:46.392 Replay Protected Memory Block: Not Supported 00:30:46.392 00:30:46.392 Firmware Slot Information 00:30:46.392 ========================= 00:30:46.392 Active slot: 0 00:30:46.392 00:30:46.392 00:30:46.392 Error Log 00:30:46.392 ========= 00:30:46.392 00:30:46.392 Active Namespaces 00:30:46.392 ================= 00:30:46.392 Discovery Log Page 00:30:46.392 ================== 00:30:46.392 Generation Counter: 2 00:30:46.392 Number of Records: 2 00:30:46.393 Record Format: 0 00:30:46.393 00:30:46.393 Discovery Log Entry 0 00:30:46.393 ---------------------- 00:30:46.393 Transport Type: 3 (TCP) 00:30:46.393 Address Family: 1 (IPv4) 00:30:46.393 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:46.393 Entry Flags: 00:30:46.393 Duplicate Returned Information: 0 00:30:46.393 Explicit Persistent Connection Support for Discovery: 0 00:30:46.393 Transport Requirements: 00:30:46.393 Secure Channel: Not Specified 00:30:46.393 Port ID: 1 (0x0001) 00:30:46.393 Controller ID: 65535 (0xffff) 00:30:46.393 Admin Max SQ Size: 32 00:30:46.393 Transport Service Identifier: 4420 00:30:46.393 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:46.393 Transport Address: 10.0.0.1 00:30:46.393 Discovery Log Entry 1 00:30:46.393 ---------------------- 00:30:46.393 Transport Type: 3 (TCP) 00:30:46.393 Address Family: 1 (IPv4) 00:30:46.393 Subsystem Type: 2 (NVM Subsystem) 00:30:46.393 Entry Flags: 00:30:46.393 Duplicate Returned Information: 0 00:30:46.393 Explicit Persistent Connection Support for Discovery: 0 00:30:46.393 Transport Requirements: 00:30:46.393 
Secure Channel: Not Specified 00:30:46.393 Port ID: 1 (0x0001) 00:30:46.393 Controller ID: 65535 (0xffff) 00:30:46.393 Admin Max SQ Size: 32 00:30:46.393 Transport Service Identifier: 4420 00:30:46.393 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:46.393 Transport Address: 10.0.0.1 00:30:46.393 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:46.393 get_feature(0x01) failed 00:30:46.393 get_feature(0x02) failed 00:30:46.393 get_feature(0x04) failed 00:30:46.393 ===================================================== 00:30:46.393 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:46.393 ===================================================== 00:30:46.393 Controller Capabilities/Features 00:30:46.393 ================================ 00:30:46.393 Vendor ID: 0000 00:30:46.393 Subsystem Vendor ID: 0000 00:30:46.393 Serial Number: de4a35a9b46b326be2c3 00:30:46.393 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:46.393 Firmware Version: 6.7.0-68 00:30:46.393 Recommended Arb Burst: 6 00:30:46.393 IEEE OUI Identifier: 00 00 00 00:30:46.393 Multi-path I/O 00:30:46.393 May have multiple subsystem ports: Yes 00:30:46.393 May have multiple controllers: Yes 00:30:46.393 Associated with SR-IOV VF: No 00:30:46.393 Max Data Transfer Size: Unlimited 00:30:46.393 Max Number of Namespaces: 1024 00:30:46.393 Max Number of I/O Queues: 128 00:30:46.393 NVMe Specification Version (VS): 1.3 00:30:46.393 NVMe Specification Version (Identify): 1.3 00:30:46.393 Maximum Queue Entries: 1024 00:30:46.393 Contiguous Queues Required: No 00:30:46.393 Arbitration Mechanisms Supported 00:30:46.393 Weighted Round Robin: Not Supported 00:30:46.393 Vendor Specific: Not Supported 00:30:46.393 Reset Timeout: 7500 ms 00:30:46.393 Doorbell Stride: 4 bytes 00:30:46.393 NVM Subsystem Reset: Not Supported 00:30:46.393 Command Sets Supported 00:30:46.393 NVM Command Set: Supported 00:30:46.393 Boot Partition: Not Supported 00:30:46.393 Memory Page Size Minimum: 4096 bytes 00:30:46.393 Memory Page Size Maximum: 4096 bytes 00:30:46.393 Persistent Memory Region: Not Supported 00:30:46.393 Optional Asynchronous Events Supported 00:30:46.393 Namespace Attribute Notices: Supported 00:30:46.393 Firmware Activation Notices: Not Supported 00:30:46.393 ANA Change Notices: Supported 00:30:46.393 PLE Aggregate Log Change Notices: Not Supported 00:30:46.393 LBA Status Info Alert Notices: Not Supported 00:30:46.393 EGE Aggregate Log Change Notices: Not Supported 00:30:46.393 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.393 Zone Descriptor Change Notices: Not Supported 00:30:46.393 Discovery Log Change Notices: Not Supported 00:30:46.393 Controller Attributes 00:30:46.393 128-bit Host Identifier: Supported 00:30:46.393 Non-Operational Permissive Mode: Not Supported 00:30:46.393 NVM Sets: Not Supported 00:30:46.393 Read Recovery Levels: Not Supported 00:30:46.393 Endurance Groups: Not Supported 00:30:46.393 Predictable Latency Mode: Not Supported 00:30:46.393 Traffic Based Keep ALive: Supported 00:30:46.393 Namespace Granularity: Not Supported 00:30:46.393 SQ Associations: Not Supported 00:30:46.393 UUID List: Not Supported 00:30:46.393 Multi-Domain Subsystem: Not Supported 00:30:46.393 Fixed Capacity Management: Not Supported 00:30:46.393 Variable Capacity Management: Not Supported 00:30:46.393 
Delete Endurance Group: Not Supported 00:30:46.393 Delete NVM Set: Not Supported 00:30:46.393 Extended LBA Formats Supported: Not Supported 00:30:46.393 Flexible Data Placement Supported: Not Supported 00:30:46.393 00:30:46.393 Controller Memory Buffer Support 00:30:46.393 ================================ 00:30:46.393 Supported: No 00:30:46.393 00:30:46.393 Persistent Memory Region Support 00:30:46.393 ================================ 00:30:46.393 Supported: No 00:30:46.393 00:30:46.393 Admin Command Set Attributes 00:30:46.393 ============================ 00:30:46.393 Security Send/Receive: Not Supported 00:30:46.393 Format NVM: Not Supported 00:30:46.393 Firmware Activate/Download: Not Supported 00:30:46.393 Namespace Management: Not Supported 00:30:46.393 Device Self-Test: Not Supported 00:30:46.393 Directives: Not Supported 00:30:46.393 NVMe-MI: Not Supported 00:30:46.393 Virtualization Management: Not Supported 00:30:46.393 Doorbell Buffer Config: Not Supported 00:30:46.393 Get LBA Status Capability: Not Supported 00:30:46.393 Command & Feature Lockdown Capability: Not Supported 00:30:46.393 Abort Command Limit: 4 00:30:46.393 Async Event Request Limit: 4 00:30:46.393 Number of Firmware Slots: N/A 00:30:46.393 Firmware Slot 1 Read-Only: N/A 00:30:46.393 Firmware Activation Without Reset: N/A 00:30:46.393 Multiple Update Detection Support: N/A 00:30:46.393 Firmware Update Granularity: No Information Provided 00:30:46.393 Per-Namespace SMART Log: Yes 00:30:46.393 Asymmetric Namespace Access Log Page: Supported 00:30:46.393 ANA Transition Time : 10 sec 00:30:46.393 00:30:46.393 Asymmetric Namespace Access Capabilities 00:30:46.393 ANA Optimized State : Supported 00:30:46.393 ANA Non-Optimized State : Supported 00:30:46.393 ANA Inaccessible State : Supported 00:30:46.393 ANA Persistent Loss State : Supported 00:30:46.393 ANA Change State : Supported 00:30:46.393 ANAGRPID is not changed : No 00:30:46.393 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:46.393 00:30:46.393 ANA Group Identifier Maximum : 128 00:30:46.393 Number of ANA Group Identifiers : 128 00:30:46.393 Max Number of Allowed Namespaces : 1024 00:30:46.393 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:46.393 Command Effects Log Page: Supported 00:30:46.393 Get Log Page Extended Data: Supported 00:30:46.393 Telemetry Log Pages: Not Supported 00:30:46.394 Persistent Event Log Pages: Not Supported 00:30:46.394 Supported Log Pages Log Page: May Support 00:30:46.394 Commands Supported & Effects Log Page: Not Supported 00:30:46.394 Feature Identifiers & Effects Log Page:May Support 00:30:46.394 NVMe-MI Commands & Effects Log Page: May Support 00:30:46.394 Data Area 4 for Telemetry Log: Not Supported 00:30:46.394 Error Log Page Entries Supported: 128 00:30:46.394 Keep Alive: Supported 00:30:46.394 Keep Alive Granularity: 1000 ms 00:30:46.394 00:30:46.394 NVM Command Set Attributes 00:30:46.394 ========================== 00:30:46.394 Submission Queue Entry Size 00:30:46.394 Max: 64 00:30:46.394 Min: 64 00:30:46.394 Completion Queue Entry Size 00:30:46.394 Max: 16 00:30:46.394 Min: 16 00:30:46.394 Number of Namespaces: 1024 00:30:46.394 Compare Command: Not Supported 00:30:46.394 Write Uncorrectable Command: Not Supported 00:30:46.394 Dataset Management Command: Supported 00:30:46.394 Write Zeroes Command: Supported 00:30:46.394 Set Features Save Field: Not Supported 00:30:46.394 Reservations: Not Supported 00:30:46.394 Timestamp: Not Supported 00:30:46.394 Copy: Not Supported 00:30:46.394 Volatile Write Cache: Present 
00:30:46.394 Atomic Write Unit (Normal): 1 00:30:46.394 Atomic Write Unit (PFail): 1 00:30:46.394 Atomic Compare & Write Unit: 1 00:30:46.394 Fused Compare & Write: Not Supported 00:30:46.394 Scatter-Gather List 00:30:46.394 SGL Command Set: Supported 00:30:46.394 SGL Keyed: Not Supported 00:30:46.394 SGL Bit Bucket Descriptor: Not Supported 00:30:46.394 SGL Metadata Pointer: Not Supported 00:30:46.394 Oversized SGL: Not Supported 00:30:46.394 SGL Metadata Address: Not Supported 00:30:46.394 SGL Offset: Supported 00:30:46.394 Transport SGL Data Block: Not Supported 00:30:46.394 Replay Protected Memory Block: Not Supported 00:30:46.394 00:30:46.394 Firmware Slot Information 00:30:46.394 ========================= 00:30:46.394 Active slot: 0 00:30:46.394 00:30:46.394 Asymmetric Namespace Access 00:30:46.394 =========================== 00:30:46.394 Change Count : 0 00:30:46.394 Number of ANA Group Descriptors : 1 00:30:46.394 ANA Group Descriptor : 0 00:30:46.394 ANA Group ID : 1 00:30:46.394 Number of NSID Values : 1 00:30:46.394 Change Count : 0 00:30:46.394 ANA State : 1 00:30:46.394 Namespace Identifier : 1 00:30:46.394 00:30:46.394 Commands Supported and Effects 00:30:46.394 ============================== 00:30:46.394 Admin Commands 00:30:46.394 -------------- 00:30:46.394 Get Log Page (02h): Supported 00:30:46.394 Identify (06h): Supported 00:30:46.394 Abort (08h): Supported 00:30:46.394 Set Features (09h): Supported 00:30:46.394 Get Features (0Ah): Supported 00:30:46.394 Asynchronous Event Request (0Ch): Supported 00:30:46.394 Keep Alive (18h): Supported 00:30:46.394 I/O Commands 00:30:46.394 ------------ 00:30:46.394 Flush (00h): Supported 00:30:46.394 Write (01h): Supported LBA-Change 00:30:46.394 Read (02h): Supported 00:30:46.394 Write Zeroes (08h): Supported LBA-Change 00:30:46.394 Dataset Management (09h): Supported 00:30:46.394 00:30:46.394 Error Log 00:30:46.394 ========= 00:30:46.394 Entry: 0 00:30:46.394 Error Count: 0x3 00:30:46.394 Submission Queue Id: 0x0 00:30:46.394 Command Id: 0x5 00:30:46.394 Phase Bit: 0 00:30:46.394 Status Code: 0x2 00:30:46.394 Status Code Type: 0x0 00:30:46.394 Do Not Retry: 1 00:30:46.653 Error Location: 0x28 00:30:46.653 LBA: 0x0 00:30:46.653 Namespace: 0x0 00:30:46.653 Vendor Log Page: 0x0 00:30:46.653 ----------- 00:30:46.653 Entry: 1 00:30:46.653 Error Count: 0x2 00:30:46.653 Submission Queue Id: 0x0 00:30:46.653 Command Id: 0x5 00:30:46.653 Phase Bit: 0 00:30:46.653 Status Code: 0x2 00:30:46.653 Status Code Type: 0x0 00:30:46.653 Do Not Retry: 1 00:30:46.653 Error Location: 0x28 00:30:46.653 LBA: 0x0 00:30:46.653 Namespace: 0x0 00:30:46.653 Vendor Log Page: 0x0 00:30:46.653 ----------- 00:30:46.653 Entry: 2 00:30:46.653 Error Count: 0x1 00:30:46.653 Submission Queue Id: 0x0 00:30:46.653 Command Id: 0x4 00:30:46.653 Phase Bit: 0 00:30:46.653 Status Code: 0x2 00:30:46.653 Status Code Type: 0x0 00:30:46.653 Do Not Retry: 1 00:30:46.653 Error Location: 0x28 00:30:46.653 LBA: 0x0 00:30:46.653 Namespace: 0x0 00:30:46.653 Vendor Log Page: 0x0 00:30:46.653 00:30:46.653 Number of Queues 00:30:46.653 ================ 00:30:46.653 Number of I/O Submission Queues: 128 00:30:46.653 Number of I/O Completion Queues: 128 00:30:46.653 00:30:46.653 ZNS Specific Controller Data 00:30:46.653 ============================ 00:30:46.653 Zone Append Size Limit: 0 00:30:46.653 00:30:46.653 00:30:46.653 Active Namespaces 00:30:46.653 ================= 00:30:46.653 get_feature(0x05) failed 00:30:46.653 Namespace ID:1 00:30:46.653 Command Set Identifier: NVM (00h) 
00:30:46.653 Deallocate: Supported 00:30:46.653 Deallocated/Unwritten Error: Not Supported 00:30:46.653 Deallocated Read Value: Unknown 00:30:46.653 Deallocate in Write Zeroes: Not Supported 00:30:46.653 Deallocated Guard Field: 0xFFFF 00:30:46.653 Flush: Supported 00:30:46.653 Reservation: Not Supported 00:30:46.653 Namespace Sharing Capabilities: Multiple Controllers 00:30:46.653 Size (in LBAs): 1310720 (5GiB) 00:30:46.653 Capacity (in LBAs): 1310720 (5GiB) 00:30:46.653 Utilization (in LBAs): 1310720 (5GiB) 00:30:46.653 UUID: ef8f24ff-abdb-4d53-a558-5e84c2ad53a4 00:30:46.654 Thin Provisioning: Not Supported 00:30:46.654 Per-NS Atomic Units: Yes 00:30:46.654 Atomic Boundary Size (Normal): 0 00:30:46.654 Atomic Boundary Size (PFail): 0 00:30:46.654 Atomic Boundary Offset: 0 00:30:46.654 NGUID/EUI64 Never Reused: No 00:30:46.654 ANA group ID: 1 00:30:46.654 Namespace Write Protected: No 00:30:46.654 Number of LBA Formats: 1 00:30:46.654 Current LBA Format: LBA Format #00 00:30:46.654 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:30:46.654 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:46.654 rmmod nvme_tcp 00:30:46.654 rmmod nvme_fabrics 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:46.654 
00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:46.654 00:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:47.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:47.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:47.592 ************************************ 00:30:47.592 END TEST nvmf_identify_kernel_target 00:30:47.592 ************************************ 00:30:47.592 00:30:47.592 real 0m3.168s 00:30:47.592 user 0m1.094s 00:30:47.592 sys 0m1.513s 00:30:47.592 00:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.592 00:49:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:47.592 00:49:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:47.592 00:49:52 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:47.592 00:49:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:47.592 00:49:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.592 00:49:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.592 ************************************ 00:30:47.592 START TEST nvmf_auth_host 00:30:47.592 ************************************ 00:30:47.592 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:47.851 * Looking for test storage... 
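
That wraps up nvmf_identify_kernel_target. Taken together, the mkdir/echo/ln -s sequence from nvmf/common.sh@658-677 earlier and the echo/rm -f/rmdir sequence from clean_kernel_target (common.sh@686-695) just above are the full configfs lifecycle of the kernel nvmet target. A hedged reconstruction as a plain script; the attribute file names are inferred rather than read out of common.sh: the SPDK-... string evidently lands in attr_model, since the identify output reported it back as the Model Number, and the remaining names are the standard kernel nvmet configfs attributes:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  ns=$sub/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet                          # nvmet_tcp is pulled in once the tcp port goes live
  mkdir "$sub"                            # common.sh@658
  mkdir "$ns"                             # common.sh@659
  mkdir "$port"                           # common.sh@660
  echo "SPDK-$nqn" > "$sub/attr_model"    # assumed target of the echo at @665
  echo 1 > "$sub/attr_allow_any_host"     # assumed target of the echo at @667
  echo /dev/nvme1n1 > "$ns/device_path"
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"        # exposes the subsystem on the port

  # teardown, mirroring clean_kernel_target (common.sh@686-695)
  echo 0 > "$ns/enable"
  rm -f "$port/subsystems/$nqn"
  rmdir "$ns"
  rmdir "$port"
  rmdir "$sub"
  modprobe -r nvmet_tcp nvmet

Note the teardown order: the namespace has to be disabled and the port-to-subsystem symlink removed before any of the rmdirs will succeed, which is exactly the order the log shows.
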
00:30:47.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:47.851 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:47.852 Cannot find device "nvmf_tgt_br" 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:47.852 Cannot find device "nvmf_tgt_br2" 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:47.852 Cannot find device "nvmf_tgt_br" 
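
nvmftestinit is now repeating for auth.sh the veth setup it already did for the identify test: first tear down whatever is left (the Cannot find device complaints are expected on an already-clean host, which is why the xtrace shows a bare true after each failing ip command, keeping set -e happy), then rebuild the topology in the commands that follow. Condensed into one place, the topology those commands create is (a sketch with the same names and addresses as the log, teardown omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three *_br peers are enslaved to the nvmf_br bridge, so the initiator-side veth at 10.0.0.1 can reach both target addresses inside the nvmf_tgt_ns_spdk namespace; the three pings that follow are exactly that reachability check.
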
00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:47.852 Cannot find device "nvmf_tgt_br2" 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:47.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:47.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:47.852 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:48.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:30:48.111 00:30:48.111 --- 10.0.0.2 ping statistics --- 00:30:48.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.111 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:48.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:48.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:30:48.111 00:30:48.111 --- 10.0.0.3 ping statistics --- 00:30:48.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.111 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:48.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:30:48.111 00:30:48.111 --- 10.0.0.1 ping statistics --- 00:30:48.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.111 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.111 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=102617 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 102617 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 102617 ']' 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.112 00:49:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.112 00:49:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5acf965f58f30d3b650bbc3aae1bd6e2 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.S33 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5acf965f58f30d3b650bbc3aae1bd6e2 0 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5acf965f58f30d3b650bbc3aae1bd6e2 0 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.485 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5acf965f58f30d3b650bbc3aae1bd6e2 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.S33 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.S33 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.S33 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1d2ce406dc1f8ef452a10c44259934f98f41478e44f33044000ce0649f6b968e 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.VWI 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1d2ce406dc1f8ef452a10c44259934f98f41478e44f33044000ce0649f6b968e 3 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1d2ce406dc1f8ef452a10c44259934f98f41478e44f33044000ce0649f6b968e 3 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1d2ce406dc1f8ef452a10c44259934f98f41478e44f33044000ce0649f6b968e 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.VWI 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.VWI 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.VWI 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c536c0aef562abace642ea2132a3e823eba08d8c0491262 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.afY 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c536c0aef562abace642ea2132a3e823eba08d8c0491262 0 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c536c0aef562abace642ea2132a3e823eba08d8c0491262 0 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c536c0aef562abace642ea2132a3e823eba08d8c0491262 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.afY 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.afY 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.afY 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cafeab6a8740da9c3fed3d052e02ac00dc1a34d7ccddfe37 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.x0f 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cafeab6a8740da9c3fed3d052e02ac00dc1a34d7ccddfe37 2 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cafeab6a8740da9c3fed3d052e02ac00dc1a34d7ccddfe37 2 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cafeab6a8740da9c3fed3d052e02ac00dc1a34d7ccddfe37 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.x0f 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.x0f 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.x0f 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0b53708c4d9ce3fb7455eff4ccb7882a 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OCf 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0b53708c4d9ce3fb7455eff4ccb7882a 
1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0b53708c4d9ce3fb7455eff4ccb7882a 1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0b53708c4d9ce3fb7455eff4ccb7882a 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:30:49.486 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OCf 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OCf 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.OCf 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=695134bd8f07ebd6d21754b1322503c6 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eCO 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 695134bd8f07ebd6d21754b1322503c6 1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 695134bd8f07ebd6d21754b1322503c6 1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=695134bd8f07ebd6d21754b1322503c6 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eCO 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eCO 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eCO 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:30:49.745 00:49:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b141da5d1ef5234d3f14db8ab5c107880f19b50da6002d4a 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GuW 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b141da5d1ef5234d3f14db8ab5c107880f19b50da6002d4a 2 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b141da5d1ef5234d3f14db8ab5c107880f19b50da6002d4a 2 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b141da5d1ef5234d3f14db8ab5c107880f19b50da6002d4a 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GuW 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GuW 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GuW 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d83313fdd43d6b9d9fe647e4c0bc002 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.t9f 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d83313fdd43d6b9d9fe647e4c0bc002 0 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d83313fdd43d6b9d9fe647e4c0bc002 0 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d83313fdd43d6b9d9fe647e4c0bc002 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.t9f 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.t9f 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.t9f 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d3d97d2ab56df1b0bb4e488c32752ef31f39c4addca7b7b5e73e442d0eb9473 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kCa 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d3d97d2ab56df1b0bb4e488c32752ef31f39c4addca7b7b5e73e442d0eb9473 3 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d3d97d2ab56df1b0bb4e488c32752ef31f39c4addca7b7b5e73e442d0eb9473 3 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d3d97d2ab56df1b0bb4e488c32752ef31f39c4addca7b7b5e73e442d0eb9473 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:30:49.745 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kCa 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kCa 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kCa 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 102617 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 102617 ']' 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
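Each gen_dhchap_key call above follows the same recipe: read random bytes as hex with xxd, then wrap that hex string in a DH-HMAC-CHAP secret. A minimal standalone sketch of the wrapping step (assumptions: the base64 payload is the secret's ASCII characters plus a four-byte little-endian CRC-32 tail, and the digest codes are 0=null, 1=sha256, 2=sha384, 3=sha512, matching the format_dhchap_key arguments in the trace; the Python body here is illustrative, not SPDK's verbatim):

    key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, as in "gen_dhchap_key null 32" above
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" > "$file" <<'EOF'
    import sys, base64, struct, zlib
    secret = sys.argv[1].encode()                 # the hex string itself is the secret
    tail = struct.pack('<I', zlib.crc32(secret))  # assumed CRC-32 trailer
    print('DHHC-1:00:%s:' % base64.b64encode(secret + tail).decode())  # 00 = null digest
    EOF
    chmod 0600 "$file"

Decoding the DHHC-1 strings echoed later in the trace (for example DHHC-1:00:NWFjZjk2... for the null key generated above) shows exactly this layout: the ASCII hex characters followed by four trailing bytes.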
00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.004 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.262 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:50.262 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:30:50.262 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.S33 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.VWI ]] 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VWI 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.afY 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.x0f ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.x0f 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OCf 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eCO ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eCO 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
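The loop above (it continues below with key3 and key4) registers every generated key file with the running target's keyring so later RPCs can refer to the material by name. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the harness the same step would look like this, against the socket waitforlisten just checked:

    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.S33
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VWI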
00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.GuW 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.t9f ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.t9f 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kCa 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
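configure_kernel_target, which runs next, builds a Linux-kernel NVMe-oF target out of configfs entries: a subsystem backed by an unused local NVMe disk, a TCP port on 10.0.0.1:4420, and (via nvmet_auth_init and nvmet_auth_set_key further down) a host entry carrying the DH-HMAC-CHAP material. A skeleton of the kernel side follows; the xtrace output hides the redirect targets of the echo commands, so the configfs attribute names below come from the nvmet ABI rather than from the trace and should be treated as assumptions:

    modprobe nvmet
    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
    host=$cfg/hosts/nqn.2024-02.io.spdk:host0
    mkdir $subsys
    mkdir $subsys/namespaces/1
    mkdir $cfg/ports/1
    echo /dev/nvme1n1 > $subsys/namespaces/1/device_path   # the disk picked by the scan below
    echo 1            > $subsys/namespaces/1/enable
    echo 10.0.0.1 > $cfg/ports/1/addr_traddr
    echo tcp      > $cfg/ports/1/addr_trtype
    echo 4420     > $cfg/ports/1/addr_trsvcid
    echo ipv4     > $cfg/ports/1/addr_adrfam
    ln -s $subsys $cfg/ports/1/subsystems/
    mkdir $host
    echo 0 > $subsys/attr_allow_any_host                   # only explicitly allowed hosts
    ln -s $host $subsys/allowed_hosts/
    echo 'hmac(sha256)' > $host/dhchap_hash
    echo ffdhe2048      > $host/dhchap_dhgroup
    echo 'DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==:' > $host/dhchap_key

The controller key goes into dhchap_ctrl_key the same way when bidirectional authentication is being tested.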
00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:50.263 00:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:50.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.521 Waiting for block devices as requested 00:30:50.779 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.779 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:51.346 No valid GPT data, bailing 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:51.346 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:51.346 No valid GPT data, bailing 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:51.605 No valid GPT data, bailing 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:51.605 No valid GPT data, bailing 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:30:51.605 00:49:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.1 -t tcp -s 4420
00:30:51.605
00:30:51.605 Discovery Log Number of Records 2, Generation counter 2
00:30:51.605 =====Discovery Log Entry 0======
00:30:51.605 trtype: tcp
00:30:51.605 adrfam: ipv4
00:30:51.605 subtype: current discovery subsystem
00:30:51.605 treq: not specified, sq flow control disable supported
00:30:51.605 portid: 1
00:30:51.605 trsvcid: 4420
00:30:51.605 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:51.605 traddr: 10.0.0.1
00:30:51.605 eflags: none
00:30:51.605 sectype: none
00:30:51.605 =====Discovery Log Entry 1======
00:30:51.605 trtype: tcp
00:30:51.605 adrfam: ipv4
00:30:51.605 subtype: nvme subsystem
00:30:51.605 treq: not specified, sq flow control disable supported
00:30:51.605 portid: 1
00:30:51.605 trsvcid: 4420
00:30:51.605 subnqn: nqn.2024-02.io.spdk:cnode0
00:30:51.605 traddr: 10.0.0.1
00:30:51.605 eflags: none
00:30:51.605 sectype: none
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==:
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==:
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:51.605 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==:
00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]]
00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==:
00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.864 nvme0n1 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.864 00:49:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 nvme0n1 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.122 00:49:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:30:52.122 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.123 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 nvme0n1 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 nvme0n1 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.469 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:52.470 00:49:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.470 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 nvme0n1 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 nvme0n1 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.728 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.986 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.244 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.245 00:49:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.245 nvme0n1 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.245 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.503 nvme0n1 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.503 00:49:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.503 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.759 nvme0n1 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:30:53.759 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.760 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 nvme0n1 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
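The trace above shows get_main_ns_ip resolving the target address for the attach that follows: a transport-keyed candidate table maps tcp to the variable name NVMF_INITIATOR_IP, which dereferences to 10.0.0.1. Below is a minimal sketch of that logic as it reads from the trace; the transport variable name (TEST_TRANSPORT) and the ${!...} indirection are assumptions, since xtrace only prints the already-expanded values.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The guards traced as "[[ -z tcp ]]" and "[[ -z NVMF_INITIATOR_IP ]]":
        # bail out if no transport is set or it has no candidate variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        # Dereference the candidate variable name; here NVMF_INITIATOR_IP -> 10.0.0.1.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

Only the tcp entry is consulted in this run; the rdma candidate presumably serves the rdma variants of the same suite.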
00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 nvme0n1 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.017 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.274 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.274 00:49:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.274 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.274 00:49:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.274 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.274 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.274 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.274 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.275 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
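Every round in this stretch of the log has the same shape: nvmet_auth_set_key writes key N (and, when one exists, controller key ckeyN) on the target for the digest/dhgroup under test, then connect_authenticate mirrors those parameters on the initiator and proves the handshake by attaching and tearing down a controller. The following is a hedged reconstruction of the initiator half, assuming rpc_cmd is the suite's usual wrapper around SPDK's rpc.py and that the keys/ckeys arrays come from the harness; every RPC name, flag, and the ckey expansion below appear verbatim in the trace.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local -a ckey=()
        # host/auth.sh@58: the controller key is optional; key 4 in this run
        # has an empty ckey, so the expansion collapses to nothing.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/dhgroup pair being exercised.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach to the address resolved by get_main_ns_ip (10.0.0.1 here).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # host/auth.sh@64-65: authentication passed iff the controller came up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The enclosing loops walk keyid 0 through 4 for each dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, and ffdhe6144 all appear in this stretch), so each digest/dhgroup pair attaches and detaches nvme0 five times.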
00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.840 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.099 nvme0n1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.099 00:49:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.357 nvme0n1 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.357 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.615 nvme0n1 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.615 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.616 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.873 nvme0n1 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.873 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:56.132 00:50:00 
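The keyid=4 entry traced just above has no controller key (ckey= at host/auth.sh@46), which exercises unidirectional authentication: the host proves itself, the controller does not. connect_authenticate handles both shapes with one line, the ${ckeys[keyid]:+...} expansion at host/auth.sh@58. A minimal, self-contained illustration of that idiom (the array values here are placeholders, not the test's real secrets):

# ${ckeys[keyid]:+WORDS} expands to WORDS only when ckeys[keyid] is set and
# non-empty; for an empty entry the array stays empty, so no
# --dhchap-ctrlr-key argument ever reaches bdev_nvme_attach_controller.
ckeys=([0]="some-ctrlr-secret" [4]="")   # keyid 4: bidirectional key absent
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[@]:-<no ctrlr-key args>}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=4 -> <no ctrlr-key args>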
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.132 00:50:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.132 nvme0n1 00:30:56.132 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.132 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.132 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.132 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.132 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.390 00:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.293 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.294 00:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.551 nvme0n1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.551 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.809 nvme0n1 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.809 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.067 
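Every attach in this log is preceded by the same get_main_ns_ip trace (nvmf/common.sh@741-755): it maps the transport under test to the environment variable holding the address to dial, then dereferences it, which is why tcp always resolves to 10.0.0.1 here. A reconstruction consistent with those traced lines; the function name is real, but TEST_TRANSPORT and the exact control flow are inferred, not quoted from the source:

# Sketch of get_main_ns_ip per the nvmf/common.sh@741-755 trace: the array
# stores variable *names*; ${!ip} then dereferences the chosen one.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs dial the target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs dial the initiator IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}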
00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.067 00:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.326 nvme0n1 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.326 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.327 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.894 nvme0n1 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.894 00:50:04 
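The nvmet_auth_set_key side of each iteration (host/auth.sh@42-51, traced again just above for sha256/ffdhe6144/keyid 4) programs the Linux kernel nvmet target: the echoed 'hmac(sha256)', dhgroup name, and DHHC-1 secrets match the values the kernel expects in a host's configfs entry. A hedged sketch of where those echoes plausibly land; the configfs paths below are the upstream nvmet attribute names, assumed rather than visible in this trace:

# Assumed target-side plumbing for one host entry (paths not shown in the log):
hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"     # digest, cf. auth.sh@48
echo 'ffdhe6144'    > "$host_cfg/dhchap_dhgroup"  # DH group, cf. auth.sh@49
echo "$key"         > "$host_cfg/dhchap_key"      # DHHC-1:... host secret, cf. @50
# Only for bidirectional keyids; keyid 4 above skips this (ckey is empty):
[[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # cf. @51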
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.894 00:50:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.152 nvme0n1 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.152 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.410 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.411 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.073 nvme0n1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.073 00:50:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 nvme0n1 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.640 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.898 00:50:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.464 nvme0n1 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.464 
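The check that just completed ([[ nvme0 == \n\v\m\e\0 ]]) is the per-iteration success test from host/auth.sh@64-65: list controllers over RPC, pull the names out with jq, require that the authenticated attach produced nvme0, then detach before the next key is installed. The backslashes are only xtrace's rendering of a quoted right-hand side, which forces a literal rather than glob match. The same three steps against SPDK's rpc.py directly (rpc_cmd in this suite is a wrapper around it):

# Verify the DH-HMAC-CHAP attach produced the expected controller, then detach.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # quoted RHS = literal comparison
scripts/rpc.py bdev_nvme_detach_controller nvme0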
00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.464 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
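The secrets cycling through this run follow the DH-HMAC-CHAP secret representation from the NVMe specification (TP 8006): DHHC-1:<t>:<base64>:, where <t> records how the secret was transformed (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret bytes with a CRC-32 appended. That is why key3 just above carries the 02 tag while its companion controller key carries 00. nvme-cli can mint keys in this format; a hedged one-liner (invocation per nvme-cli's gen-dhchap-key, not taken from this log):

# Anatomy of one secret from this run (whitespace added for annotation only):
#   DHHC-1 : 02 : YjE0MWRh...ZDRhXvxEbg== :
#            |    |
#            |    base64(secret bytes + 4-byte CRC-32 of the secret)
#            transform tag: 00=none, 01=SHA-256, 02=SHA-384, 03=SHA-512
# Generating a compatible secret (assumed nvme-cli invocation):
nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:host0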
00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.465 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.031 nvme0n1 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.031 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.289 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.289 00:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.289 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.289 00:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:03.289 
00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.289 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.856 nvme0n1 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.856 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 nvme0n1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
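A note on the shell idiom at host/auth.sh@58 in the trace above: `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` builds an optional argument list. When `ckeys[keyid]` is empty, as it is for keyid 4 (the trace shows a bare `ckey=` there), the array stays empty and the subsequent bdev_nvme_attach_controller call omits `--dhchap-ctrlr-key` altogether, so that key is exercised with unidirectional authentication only. The DHHC-1 secrets themselves follow the NVMe-oF representation, where the two-digit field after the `DHHC-1:` prefix records the hash the secret was transformed with (00 = no transformation, 01/02/03 = SHA-256/384/512). A minimal, self-contained sketch of the optional-flag idiom, with hypothetical secret values:

    #!/usr/bin/env bash
    # ${var:+word} expands to "word" only when var is set and non-empty,
    # so the array either picks up the flag pair or stays empty.
    ckeys=("DHHC-1:03:example-ctrlr-secret:" "")   # index 1 deliberately empty
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo attach "${ckey[@]}"
        # keyid=0 -> attach --dhchap-ctrlr-key ckey0
        # keyid=1 -> attach                        (flag omitted)
    done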
00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 nvme0n1 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.115 00:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.115 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.115 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.115 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.115 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 nvme0n1 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.375 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.635 nvme0n1 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.635 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.635 nvme0n1 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
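The nvmf/common.sh@741-755 block that repeats before every attach is `get_main_ns_ip` resolving the address to dial: it maps the transport under test to the name of the environment variable holding the initiator-facing IP, dereferences it, and echoes the result (10.0.0.1 for tcp throughout this run). A rough reconstruction from the trace follows; the upstream helper may differ in detail, and the `TEST_TRANSPORT` name plus the `${!ip}` indirect expansion are inferred from the `[[ -z tcp ]]` / `[[ -z 10.0.0.1 ]]` checks rather than read from source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out if the transport is unset or has no mapping.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }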
00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.894 nvme0n1 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.894 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
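Every `connect_authenticate <digest> <dhgroup> <keyid>` pass in this trace has the same shape: restrict the host to a single digest/dhgroup pair via bdev_nvme_set_options, attach with the matching DH-HMAC-CHAP key, confirm the controller actually came up, then detach. The `[[ nvme0 == \n\v\m\e\0 ]]` entries are xtrace printing a quoted right-hand side, which forces a literal (non-glob) comparison against the name reported by bdev_nvme_get_controllers. Below is a simplified sketch of one pass as it reads from this trace, not the verbatim helper: the real function also resolves the address through get_main_ns_ip and appends `--dhchap-ctrlr-key "ckey${keyid}"` when a controller key is configured, and the NQNs/address shown are the fixed values used throughout this run:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
        # Authentication succeeded only if the controller shows up by name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }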
00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.153 00:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.153 nvme0n1 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:05.153 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.154 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.412 nvme0n1 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:05.412 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.413 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.671 nvme0n1 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.671 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.930 nvme0n1 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.930 00:50:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.930 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.189 nvme0n1 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:06.189 00:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.189 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 nvme0n1 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.448 00:50:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.448 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.713 nvme0n1 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:06.714 00:50:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.714 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.974 nvme0n1 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:06.974 00:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.232 nvme0n1 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:07.232 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.233 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.797 nvme0n1 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.797 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.798 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.363 nvme0n1 00:31:08.363 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.363 00:50:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.363 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.363 00:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.363 00:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.363 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.620 nvme0n1 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.620 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.188 nvme0n1 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
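The host-side sequence each of these iterations drives can be condensed as the sketch below. This is not the verbatim host/auth.sh: rpc_cmd is SPDK's test wrapper around scripts/rpc.py, the keyring names key0-key4 / ckey0-ckey4 are assumed to have been registered earlier in the run, and the rootdir path is a hypothetical placeholder; the addresses, NQNs, and flags are copied from the trace.

rootdir=/path/to/spdk            # hypothetical checkout location
rpc="$rootdir/scripts/rpc.py"
digest=sha384 dhgroup=ffdhe6144 keyid=4

# Restrict the initiator to the digest/DH group under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# keyids 0-3 carry a paired controller key in this test; keyid 4 does not.
ctrlr_key=()
[[ $keyid -lt 4 ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")

# Attach with DH-HMAC-CHAP authentication, verify the controller came up,
# then detach so the next digest/dhgroup/keyid combination can run.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" "${ctrlr_key[@]}"
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" bdev_nvme_detach_controller nvme0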
00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.188 00:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.446 nvme0n1 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.446 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
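On the target side, each nvmet_auth_set_key call above installs the matching secret before the host reconnects. The trace shows only the values being echoed, not their destinations; assuming the test drives the Linux kernel soft target, the writes plausibly land in the per-host nvmet configfs attributes, roughly as follows (the paths are an assumption, not shown in the log; the key value is copied from the ffdhe8192/keyid=0 iteration starting here).

hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed configfs layout

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"       # digest under test
echo ffdhe8192      > "$host_dir/dhchap_dhgroup"    # DH group under test
# Host key for this keyid (value taken verbatim from the trace):
echo 'DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI:' > "$host_dir/dhchap_key"
# Iterations that define a bidirectional secret also set the controller key:
# echo "$ckey" > "$host_dir/dhchap_ctrl_key"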
00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:09.703 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.704 00:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 nvme0n1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.271 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.836 nvme0n1 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.836 00:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.768 nvme0n1 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.768 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.769 00:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.334 nvme0n1 00:31:12.334 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.334 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:12.334 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:12.335 00:50:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.335 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.911 nvme0n1 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:12.911 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.912 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 nvme0n1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 nvme0n1 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.170 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.429 nvme0n1 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.429 00:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.429 00:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.429 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.687 nvme0n1 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.687 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.688 nvme0n1 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.688 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.946 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.947 nvme0n1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.947 
00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.947 00:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.947 00:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.205 nvme0n1 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
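# A sketch of the nvmet_auth_set_key helper whose trace surrounds this point
# (host/auth.sh@42-51). bash xtrace does not print redirections, so the
# destinations of the echoes at @48-51 are invisible in this log; the configfs
# paths below are an assumption based on the kernel nvmet DH-HMAC-CHAP host
# attributes, not read from the script itself.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3                      # @42-44
    local key=${keys[keyid]} ckey=${ckeys[keyid]}            # @45-46
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

    echo "hmac(${digest})" > "${host}/dhchap_hash"           # @48
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"             # @49
    echo "${key}" > "${host}/dhchap_key"                     # @50
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # @51
}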
00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.205 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.206 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.464 nvme0n1 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.464 00:50:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
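# A sketch of get_main_ns_ip as traced at nvmf/common.sh@741-755 just above and
# below. ip_candidates maps each transport to the *name* of an environment
# variable; @748 binds that name, while the later checks (@750, @755) already
# operate on the resolved address, 10.0.0.1 throughout this run. The indirect
# expansion and the TEST_TRANSPORT variable name are assumptions inferred from
# the trace, not read from the script.
get_main_ns_ip() {
    local ip                                          # @741
    local -A ip_candidates=(                          # @742
        ["rdma"]=NVMF_FIRST_TARGET_IP                 # @744
        ["tcp"]=NVMF_INITIATOR_IP                     # @745
    )
    # @747: bail out unless the transport under test (tcp here) has a mapping
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}              # @748: ip=NVMF_INITIATOR_IP
    ip=${!ip}                                         # assumed dereference to 10.0.0.1
    [[ -z $ip ]] && return 1                          # @750
    echo "$ip"                                        # @755
}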
00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.464 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 nvme0n1 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.722 
00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 nvme0n1 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.722 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.980 nvme0n1 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.980 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.238 00:50:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.238 00:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.496 nvme0n1 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:15.496 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
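The nvmet_auth_set_key rounds traced here provision the target side of each DH-HMAC-CHAP transaction before the host attempts to connect. The xtrace records only the echo arguments, not their redirection targets, so the standalone sketch below assumes the standard Linux kernel nvmet configfs layout (/sys/kernel/config/nvmet/hosts/<hostnqn>/ with dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes — an assumption, not something shown in this trace); the secrets themselves are copied verbatim from the keyid=1 round above.

```bash
#!/usr/bin/env bash
# Target-side provisioning for one DH-HMAC-CHAP round (sha512 / ffdhe4096 /
# keyid=1). The configfs path and attribute names are assumptions based on
# the usual kernel nvmet layout; they are not visible in the xtrace itself.
hostnqn="nqn.2024-02.io.spdk:host0"
host_cfg="/sys/kernel/config/nvmet/hosts/${hostnqn}"

echo 'hmac(sha512)' > "${host_cfg}/dhchap_hash"     # digest for this round
echo 'ffdhe4096'    > "${host_cfg}/dhchap_dhgroup"  # FFDHE group for this round

# Host secret (key1) and controller secret (ckey1), verbatim from the trace;
# supplying a controller secret makes the authentication bidirectional:
echo 'DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==:' \
  > "${host_cfg}/dhchap_key"
echo 'DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==:' \
  > "${host_cfg}/dhchap_ctrl_key"
```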
00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.497 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.755 nvme0n1 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.755 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 nvme0n1 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.014 00:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 nvme0n1 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
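On the host side, each round reduces to the two RPCs visible in the trace: pin the initiator to exactly one digest/DH-group combination, then attach with the key pair under test. Below is a standalone equivalent of the sha512/ffdhe6144/keyid=0 round, written against scripts/rpc.py directly (rpc_cmd is the autotest wrapper around it); both commands are taken verbatim from the xtrace, while the rpc.py path is an assumption, and the names key0/ckey0 refer to keys registered with the SPDK keyring earlier in the run, outside this excerpt.

```bash
#!/usr/bin/env bash
# Host-side half of one authentication round (sha512 / ffdhe6144 / keyid=0).
# key0/ckey0 must already exist in SPDK's keyring; the rpc.py path is an
# assumption, not read from this trace.
RPC=scripts/rpc.py

# Restrict the allowed digest and DH group so this round negotiates exactly
# the combination under test:
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Attach over TCP; the attach succeeds only if the DH-HMAC-CHAP transaction
# completes (bidirectionally here, since a controller key is supplied):
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
```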
00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.272 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.273 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.273 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.273 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.530 nvme0n1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
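Each round then closes with the check-and-teardown visible at host/auth.sh@64-65 throughout this trace: list controllers, confirm that nvme0 actually came up (i.e. authentication succeeded), and detach it so the next digest/dhgroup/keyid combination starts from a clean state. A minimal sketch, again assuming the scripts/rpc.py entry point:

```bash
#!/usr/bin/env bash
# Verify-and-detach step that ends every round in the sweep above.
RPC=scripts/rpc.py

# The test expects exactly one controller, named nvme0, after a successful
# authenticated attach:
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
if [[ "$name" != "nvme0" ]]; then
  echo "round failed: controller did not authenticate" >&2
  exit 1
fi

# Tear down before the next digest/dhgroup/keyid combination:
$RPC bdev_nvme_detach_controller nvme0
```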
00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.789 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.047 nvme0n1 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.047 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.305 00:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.306 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.306 00:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.563 nvme0n1 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.563 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.564 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.132 nvme0n1 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.132 00:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 nvme0n1 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 00:50:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWFjZjk2NWY1OGYzMGQzYjY1MGJiYzNhYWUxYmQ2ZTK/BNOI: 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQyY2U0MDZkYzFmOGVmNDUyYTEwYzQ0MjU5OTM0Zjk4ZjQxNDc4ZTQ0ZjMzMDQ0MDAwY2UwNjQ5ZjZiOTY4ZWI3NOQ=: 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.391 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 nvme0n1 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.328 00:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.328 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.895 nvme0n1 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.895 00:50:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGI1MzcwOGM0ZDljZTNmYjc0NTVlZmY0Y2NiNzg4MmEzbX0B: 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: ]] 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njk1MTM0YmQ4ZjA3ZWJkNmQyMTc1NGIxMzIyNTAzYzZb2MKw: 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:19.895 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.896 00:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.462 nvme0n1 00:31:20.462 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.462 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.462 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.462 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.462 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjE0MWRhNWQxZWY1MjM0ZDNmMTRkYjhhYjVjMTA3ODgwZjE5YjUwZGE2MDAyZDRhXvxEbg==: 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4MzMxM2ZkZDQzZDZiOWQ5ZmU2NDdlNGMwYmMwMDICA2/B: 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:20.721 00:50:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.721 00:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.289 nvme0n1 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQzZDk3ZDJhYjU2ZGYxYjBiYjRlNDg4YzMyNzUyZWYzMWYzOWM0YWRkY2E3YjdiNWU3M2U0NDJkMGViOTQ3M40aTX4=: 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:21.289 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.226 nvme0n1 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1MzZjMGFlZjU2MmFiYWNlNjQyZWEyMTMyYTNlODIzZWJhMDhkOGMwNDkxMjYybJlQtA==: 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2FmZWFiNmE4NzQwZGE5YzNmZWQzZDA1MmUwMmFjMDBkYzFhMzRkN2NjZGRmZTM3AvA9yA==: 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.226 
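The loop traced above performs one round-trip per key id (2, 3, then 4): nvmet_auth_set_key programs the kernel target's DHCHAP entry, bdev_nvme_set_options restricts the host to a single digest/dhgroup pair, and the attach must succeed with the matching --dhchap-key/--dhchap-ctrlr-key (key 4 has no controller key, so ckey is left empty). A minimal sketch of one iteration, reconstructed from the rpc_cmd calls in the trace; rpc_cmd is a thin wrapper around scripts/rpc.py, and the address and NQNs are the ones this run uses:

    # one positive DHCHAP round-trip (keyid 3 shown)
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0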
00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.226 2024/07/12 00:50:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:22.226 request: 00:31:22.226 { 00:31:22.226 "method": "bdev_nvme_attach_controller", 00:31:22.226 "params": { 00:31:22.226 "name": "nvme0", 00:31:22.226 "trtype": "tcp", 00:31:22.226 "traddr": "10.0.0.1", 00:31:22.226 "adrfam": "ipv4", 00:31:22.226 "trsvcid": "4420", 00:31:22.226 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:22.226 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:22.226 "prchk_reftag": false, 00:31:22.226 "prchk_guard": false, 00:31:22.226 "hdgst": false, 00:31:22.226 "ddgst": false 00:31:22.226 } 00:31:22.226 } 00:31:22.226 Got JSON-RPC error response 00:31:22.226 GoRPCClient: error on JSON-RPC call 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:22.226 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.227 00:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.227 2024/07/12 00:50:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:22.227 request: 00:31:22.227 { 00:31:22.227 "method": "bdev_nvme_attach_controller", 00:31:22.227 "params": { 00:31:22.227 "name": 
"nvme0", 00:31:22.227 "trtype": "tcp", 00:31:22.227 "traddr": "10.0.0.1", 00:31:22.227 "adrfam": "ipv4", 00:31:22.227 "trsvcid": "4420", 00:31:22.227 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:22.227 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:22.227 "prchk_reftag": false, 00:31:22.227 "prchk_guard": false, 00:31:22.227 "hdgst": false, 00:31:22.227 "ddgst": false, 00:31:22.227 "dhchap_key": "key2" 00:31:22.227 } 00:31:22.227 } 00:31:22.227 Got JSON-RPC error response 00:31:22.227 GoRPCClient: error on JSON-RPC call 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.227 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.486 2024/07/12 00:50:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:22.486 request: 00:31:22.486 { 00:31:22.486 "method": "bdev_nvme_attach_controller", 00:31:22.486 "params": { 00:31:22.486 "name": "nvme0", 00:31:22.486 "trtype": "tcp", 00:31:22.486 "traddr": "10.0.0.1", 00:31:22.486 "adrfam": "ipv4", 00:31:22.486 "trsvcid": "4420", 00:31:22.486 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:22.486 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:22.486 "prchk_reftag": false, 00:31:22.486 "prchk_guard": false, 00:31:22.486 "hdgst": false, 00:31:22.486 "ddgst": false, 00:31:22.486 "dhchap_key": "key1", 00:31:22.486 "dhchap_ctrlr_key": "ckey2" 00:31:22.486 } 00:31:22.486 } 00:31:22.486 Got JSON-RPC error response 00:31:22.486 GoRPCClient: error on JSON-RPC call 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.486 rmmod nvme_tcp 00:31:22.486 rmmod nvme_fabrics 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 102617 ']' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 102617 00:31:22.486 00:50:27 
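The NOT-wrapped attaches above are the expected-failure half of the test. After the target is rekeyed to sha256/ffdhe2048 with key 1, connecting with no key at all, with the wrong key (key2), or with the right key but a mismatched controller key (key1/ckey2) must each be rejected; the Code=-5 Input/output error responses are the pass condition, and bdev_nvme_get_controllers must stay empty afterwards. A sketch of one probe under the same assumptions as the sketch above; the harness uses its NOT helper to invert the exit status, plain shell shown here:

    # expected failure: wrong DHCHAP key must be rejected by the target
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi
    scripts/rpc.py bdev_nvme_get_controllers | jq length   # expect: 0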
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 102617 ']' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 102617 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102617 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:22.486 killing process with pid 102617 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102617' 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 102617 00:31:22.486 00:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 102617 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:23.860 00:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:24.424 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:24.424 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:24.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:24.681 00:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.S33 /tmp/spdk.key-null.afY /tmp/spdk.key-sha256.OCf /tmp/spdk.key-sha384.GuW /tmp/spdk.key-sha512.kCa /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:31:24.681 00:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:24.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:24.939 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:24.939 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:24.939 00:31:24.939 real 0m37.348s 00:31:24.939 user 0m32.999s 00:31:24.939 sys 0m4.083s 00:31:24.939 00:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:24.939 00:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.939 ************************************ 00:31:24.939 END TEST nvmf_auth_host 00:31:24.939 ************************************ 00:31:25.196 00:50:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:25.196 00:50:29 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:31:25.196 00:50:29 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:25.196 00:50:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:25.196 00:50:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.196 00:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.196 ************************************ 00:31:25.196 START TEST nvmf_digest 00:31:25.196 ************************************ 00:31:25.196 00:50:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:25.196 * Looking for test storage... 
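The cleanup traced just above unwinds the kernel nvmet target through configfs in roughly the reverse order of its creation, then unloads the modules. Sketched from the rm/rmdir calls in the trace; the bare 'echo 0' is assumed to disable the namespace before removal, since the trace does not show its redirection target:

    # kernel nvmet teardown, paths as created by this test
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # assumed target
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet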
00:31:25.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:25.196 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:25.197 Cannot find device "nvmf_tgt_br" 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:25.197 Cannot find device "nvmf_tgt_br2" 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:25.197 Cannot find device "nvmf_tgt_br" 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:25.197 Cannot find device "nvmf_tgt_br2" 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:31:25.197 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:25.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:25.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:25.455 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:25.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:31:25.456 00:31:25.456 --- 10.0.0.2 ping statistics --- 00:31:25.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.456 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:25.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:25.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:31:25.456 00:31:25.456 --- 10.0.0.3 ping statistics --- 00:31:25.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.456 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:25.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:31:25.456 00:31:25.456 --- 10.0.0.1 ping statistics --- 00:31:25.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.456 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:25.456 00:50:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:25.714 ************************************ 00:31:25.714 START TEST nvmf_digest_clean 00:31:25.714 ************************************ 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=104214 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 104214 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104214 ']' 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.714 00:50:30 
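nvmf_veth_init above builds a three-legged bridge so the initiator side (default namespace, 10.0.0.1) can reach the target interfaces inside nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3); the three pings are the connectivity smoke test. Condensed from the ip/iptables commands in the trace, omitting the repeated 'ip link set ... up' calls:

    # veth/bridge topology: one initiator leg, two target legs, joined on nvmf_br
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT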
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:25.714 00:50:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:25.714 [2024-07-12 00:50:30.536913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:25.714 [2024-07-12 00:50:30.537107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.972 [2024-07-12 00:50:30.722149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.230 [2024-07-12 00:50:31.030640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.230 [2024-07-12 00:50:31.030740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.230 [2024-07-12 00:50:31.030772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.230 [2024-07-12 00:50:31.030790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.230 [2024-07-12 00:50:31.030804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
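nvmfappstart launches the target inside the namespace and holds it at --wait-for-rpc, so the digest test can adjust accel settings before the framework initializes; waitforlisten then polls until the app answers on /var/tmp/spdk.sock. The launch, reconstructed from the trace with the repo path shortened:

    # hold nvmf_tgt at the RPC wait state inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!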
00:31:26.230 [2024-07-12 00:50:31.030857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.488 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:26.488 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:26.488 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:26.488 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:26.488 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.746 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:27.004 null0 00:31:27.004 [2024-07-12 00:50:31.806187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.004 [2024-07-12 00:50:31.830367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104267 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104267 /var/tmp/bperf.sock 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104267 ']' 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
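run_bperf drives each measurement with a separate bdevperf process, pinned to core mask 0x2 and exposing its own RPC socket so it can be controlled independently of the target; the first round is 4 KiB random reads at queue depth 128 for 2 seconds. The invocation, as shown in the trace:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &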
00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.004 00:50:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:27.262 [2024-07-12 00:50:31.958308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:27.262 [2024-07-12 00:50:31.958514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104267 ] 00:31:27.262 [2024-07-12 00:50:32.147868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.520 [2024-07-12 00:50:32.415671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.085 00:50:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.085 00:50:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:31:28.085 00:50:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:28.085 00:50:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:28.085 00:50:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:28.651 00:50:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:28.651 00:50:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.217 nvme0n1 00:31:29.217 00:50:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:29.217 00:50:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:29.217 Running I/O for 2 seconds... 
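Once bdevperf is up, the bperf_rpc calls above finish framework init, attach the controller with --ddgst so every TCP data PDU carries a data digest (the crc32c work under test), and kick off the run via the bdevperf helper script:

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests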
00:31:31.115
00:31:31.115 Latency(us)
00:31:31.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:31.115 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:31.115 nvme0n1 : 2.01 14366.45 56.12 0.00 0.00 8899.02 4557.73 19899.11
00:31:31.115 ===================================================================================================================
00:31:31.115 Total : 14366.45 56.12 0.00 0.00 8899.02 4557.73 19899.11
00:31:31.115 0
00:31:31.115 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:31.115 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:31.115 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:31.115 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:31.115 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:31.115 | select(.opcode=="crc32c")
00:31:31.115 | "\(.module_name) \(.executed)"'
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104267
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104267 ']'
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104267
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104267
00:31:31.735 00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
killing process with pid 104267
00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104267'
Received shutdown signal, test time was about 2.000000 seconds
00:31:31.735
00:31:31.735 Latency(us)
00:31:31.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:31.735 ===================================================================================================================
00:31:31.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104267
00:50:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104267
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104370
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104370 /var/tmp/bperf.sock
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104370 ']'
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:32.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:32.670 00:50:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:32.928 [2024-07-12 00:50:37.616135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:32.928 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:32.928 Zero copy mechanism will not be used.
00:31:32.928 [2024-07-12 00:50:37.616434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104370 ]
00:31:32.928 [2024-07-12 00:50:37.786112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:33.186 [2024-07-12 00:50:38.076922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:33.751 00:50:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:33.751 00:50:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:31:33.751 00:50:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:33.751 00:50:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:33.751 00:50:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:34.316 00:50:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:34.316 00:50:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:34.573 nvme0n1
00:31:34.573 00:50:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:34.573 00:50:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:34.573 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:34.573 Zero copy mechanism will not be used.
00:31:34.573 Running I/O for 2 seconds...
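Each clean-digest pass differs only in the (rw, bs, qd) tuple handed to run_bperf; the host/digest.sh@129 through @131 calls in this trace cover the remaining combinations. A sketch of the sweep, with run_bperf assumed to wrap the start/init/attach/run/teardown steps already traced above:

# scan_dsa stays false throughout this job, so the crc32c work is expected
# to land in the software accel module rather than a DSA offload.
for spec in 'randread 4096 128' 'randread 131072 16' \
            'randwrite 4096 128' 'randwrite 131072 16'; do
    run_bperf $spec false
done

The 131072-byte cases exceed bdevperf's 65536-byte zero-copy threshold, which is why those runs log "Zero copy mechanism will not be used."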
00:31:37.101
00:31:37.101 Latency(us)
00:31:37.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:37.101 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:37.101 nvme0n1 : 2.00 5826.15 728.27 0.00 0.00 2741.70 763.35 5362.04
00:31:37.101 ===================================================================================================================
00:31:37.101 Total : 5826.15 728.27 0.00 0.00 2741.70 763.35 5362.04
00:31:37.101 0
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:37.101 | select(.opcode=="crc32c")
00:31:37.101 | "\(.module_name) \(.executed)"'
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104370
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104370 ']'
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104370
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104370
killing process with pid 104370
Received shutdown signal, test time was about 2.000000 seconds
00:31:37.101
00:31:37.101 Latency(us)
00:31:37.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:37.101 ===================================================================================================================
00:31:37.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104370'
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104370
00:31:37.101 00:50:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104370
00:31:38.476 00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104467
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104467 /var/tmp/bperf.sock
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104467 ']'
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:50:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:38.476 [2024-07-12 00:50:43.287941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:38.476 [2024-07-12 00:50:43.288121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104467 ]
00:31:38.734 [2024-07-12 00:50:43.453854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:38.992 [2024-07-12 00:50:43.733166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:39.560 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:39.560 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:31:39.560 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:39.560 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:39.560 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:40.144 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:40.144 00:50:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:40.403 nvme0n1
00:31:40.403 00:50:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:40.403 00:50:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:40.403 Running I/O for 2 seconds...
00:31:42.933
00:31:42.933 Latency(us)
00:31:42.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:42.933 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:42.933 nvme0n1 : 2.01 16814.66 65.68 0.00 0.00 7603.90 3247.01 17635.14
00:31:42.933 ===================================================================================================================
00:31:42.933 Total : 16814.66 65.68 0.00 0.00 7603.90 3247.01 17635.14
00:31:42.933 0
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:42.933 | select(.opcode=="crc32c")
00:31:42.933 | "\(.module_name) \(.executed)"'
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104467
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104467 ']'
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104467
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104467
killing process with pid 104467
Received shutdown signal, test time was about 2.000000 seconds
00:31:42.933
00:31:42.933 Latency(us)
00:31:42.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:42.933 ===================================================================================================================
00:31:42.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104467'
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104467
00:31:42.933 00:50:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104467
00:31:43.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
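After each timed run the suite pulls accel framework statistics over the same socket and verifies both that crc32c digests were actually computed and which module computed them. A sketch of that check, reusing the exact RPC call and jq filter from the trace (get_accel_stats and bperf_rpc are digest.sh helpers; the process-substitution plumbing here is an assumption):

exp_module=software   # scan_dsa=false, so no DSA offload is expected
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))              # at least one digest was computed
[[ $acc_module == "$exp_module" ]]  # and the expected module did the work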
00:31:43.868 00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=104584
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 104584 /var/tmp/bperf.sock
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 104584 ']'
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:50:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:44.126 [2024-07-12 00:50:48.888264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:44.126 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:44.126 Zero copy mechanism will not be used.
00:31:44.127 [2024-07-12 00:50:48.888562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104584 ]
00:31:44.127 [2024-07-12 00:50:49.059696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:44.691 [2024-07-12 00:50:49.343807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:44.949 00:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:44.949 00:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:31:44.949 00:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:44.949 00:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:44.949 00:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:45.514 00:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:45.514 00:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:46.077 nvme0n1
00:31:46.077 00:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:46.077 00:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:46.077 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:46.077 Zero copy mechanism will not be used.
00:31:46.077 Running I/O for 2 seconds...
00:31:47.973
00:31:47.973 Latency(us)
00:31:47.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:47.973 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:47.973 nvme0n1 : 2.00 3689.45 461.18 0.00 0.00 4326.71 2234.18 6166.34
00:31:47.973 ===================================================================================================================
00:31:47.973 Total : 3689.45 461.18 0.00 0.00 4326.71 2234.18 6166.34
00:31:47.973 0
00:31:47.973 00:50:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:47.973 00:50:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:47.973 00:50:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:47.973 00:50:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:47.973 00:50:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:47.973 | select(.opcode=="crc32c")
00:31:47.973 | "\(.module_name) \(.executed)"'
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 104584
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104584 ']'
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104584
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:48.231 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104584
00:31:48.489 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:31:48.489 00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
killing process with pid 104584
Received shutdown signal, test time was about 2.000000 seconds
00:31:48.489
00:31:48.489 Latency(us)
00:31:48.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:48.489 ===================================================================================================================
00:31:48.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104584'
00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104584
00:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104584
00:31:49.863 00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 104214
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 104214 ']'
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 104214
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104214
killing process with pid 104214
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104214'
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 104214
00:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 104214
00:31:51.239 ************************************
00:31:51.239 END TEST nvmf_digest_clean
00:31:51.239 ************************************
00:31:51.239
00:31:51.239 real 0m25.532s
00:31:51.239 user 0m47.354s
00:31:51.239 sys 0m5.516s
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:51.239 00:50:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:50:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:50:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:50:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:50:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:51.239 ************************************
00:31:51.239 START TEST nvmf_digest_error
00:31:51.239 ************************************
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:51.239 00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=104725
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 104725
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104725 ']'
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:51.239 [2024-07-12 00:50:56.125234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:51.239 [2024-07-12 00:50:56.125426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:51.497 [2024-07-12 00:50:56.300643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:51.756 [2024-07-12 00:50:56.589084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:51.756 [2024-07-12 00:50:56.589173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:51.756 [2024-07-12 00:50:56.589190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:51.756 [2024-07-12 00:50:56.589205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:51.756 [2024-07-12 00:50:56.589217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:51.756 [2024-07-12 00:50:56.589269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:52.323 00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:52.323 00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:52.323 [2024-07-12 00:50:57.138390] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:52.890 null0
00:31:52.890 [2024-07-12 00:50:57.538018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:52.890 [2024-07-12 00:50:57.562183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104770
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104770 /var/tmp/bperf.sock
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104770 ']'
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:52.890 [2024-07-12 00:50:57.683050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:52.890 [2024-07-12 00:50:57.683254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104770 ]
00:31:53.148 [2024-07-12 00:50:57.860475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:53.407 [2024-07-12 00:50:58.111215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:53.665 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:53.665 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:31:53.665 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:53.665 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.232 00:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.491 nvme0n1
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:54.491 00:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:54.491 Running I/O for 2 seconds...
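This error-path pass differs from the clean runs in three RPCs visible above: NVMe error statistics are enabled with retries turned off so a digest failure surfaces immediately, and the accel error injector (crc32c was assigned to the error module when the target started) is armed to corrupt results. A sketch of the arming sequence; splitting the calls across the two sockets follows the trace (rpc_cmd targets the nvmf target on /var/tmp/spdk.sock, bperf_rpc targets bdevperf), though the netns wrapper the target side would need is elided here as an assumption:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf side: count NVMe errors, never retry a failed I/O.
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side: start with injection disabled, attach the digest-enabled
# controller, then corrupt the next 256 crc32c operations.
$RPC -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then shows up initiator-side as one of the "data digest error" / TRANSIENT TRANSPORT ERROR triplets that fill the rest of this run.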
00:31:54.491 [2024-07-12 00:50:59.407884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.491 [2024-07-12 00:50:59.407990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.491 [2024-07-12 00:50:59.408018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.491 [2024-07-12 00:50:59.424918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.491 [2024-07-12 00:50:59.425019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.491 [2024-07-12 00:50:59.425046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.439872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.439971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.439996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.459437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.459536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.459562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.475423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.475518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.475558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.491913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.492013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.492037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.509208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.509307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.509333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.527031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.527128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.527153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.545378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.545485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.545510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.564078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.564180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.564205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.582705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.582801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.582827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.599817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.599924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.599950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.618569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.618692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.634183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.634301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.653882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.653985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.654010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:54.750 [2024-07-12 00:50:59.672821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:54.750 [2024-07-12 00:50:59.672927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:54.750 [2024-07-12 00:50:59.672951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.691811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.691905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.691930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.710096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.710192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.710217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.726653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.726749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.726775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.744271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.744378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.744419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.759213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.759311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.759335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.777983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.778173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.778204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.796087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.796202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.796227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.813821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.010 [2024-07-12 00:50:59.813916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.010 [2024-07-12 00:50:59.813941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.010 [2024-07-12 00:50:59.834202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.834298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.834323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.011 [2024-07-12 00:50:59.854049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.854145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.854171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.011 [2024-07-12 00:50:59.873728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.873825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.873851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.011 [2024-07-12 00:50:59.892971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.893072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.893115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.011 [2024-07-12 00:50:59.909346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.909456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.909483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.011 [2024-07-12 00:50:59.927701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.011 [2024-07-12 00:50:59.927793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.011 [2024-07-12 00:50:59.927818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:50:59.945242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.270 [2024-07-12 00:50:59.945338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.270 [2024-07-12 00:50:59.945364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:50:59.964847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.270 [2024-07-12 00:50:59.964948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.270 [2024-07-12 00:50:59.964973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:50:59.983572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.270 [2024-07-12 00:50:59.983671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.270 [2024-07-12 00:50:59.983721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:51:00.002356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.270 [2024-07-12 00:51:00.002462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.270 [2024-07-12 00:51:00.002487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:51:00.020886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.270 [2024-07-12 00:51:00.020994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.270 [2024-07-12 00:51:00.021019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.270 [2024-07-12 00:51:00.040019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.040115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.040139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.058250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.058342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.058367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.076775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.076865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.076889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.095419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.095519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.095543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.114542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.114660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.133299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.133388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.133429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.152354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.152460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.152484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.171473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.171570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.171596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.271 [2024-07-12 00:51:00.189852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.271 [2024-07-12 00:51:00.189945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.271 [2024-07-12 00:51:00.189970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.530 [2024-07-12 00:51:00.209208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.530 [2024-07-12 00:51:00.209296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.530 [2024-07-12 00:51:00.209320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.530 [2024-07-12 00:51:00.228010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.530 [2024-07-12 00:51:00.228119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.530 [2024-07-12 00:51:00.228143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.246547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.246643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.246667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.264717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.264810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.264837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.283128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.283223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.283248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.301630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.301720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.301743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.320256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.320343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.320367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.341449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.341541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.341565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.365527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.365628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.365652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.387962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.388069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.388106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.406324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.406432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.406457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.425090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.425183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.425207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.444032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.444144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.444170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.531 [2024-07-12 00:51:00.462417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.531 [2024-07-12 00:51:00.462504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.531 [2024-07-12 00:51:00.462529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.790 [2024-07-12 00:51:00.478132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.790 [2024-07-12 00:51:00.478222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.790 [2024-07-12 00:51:00.478247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.790 [2024-07-12 00:51:00.496987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.790 [2024-07-12 00:51:00.497090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.790 [2024-07-12 00:51:00.497115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.790 [2024-07-12 00:51:00.515956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.790 [2024-07-12 00:51:00.516046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.790 [2024-07-12 00:51:00.516071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.790 [2024-07-12 00:51:00.534302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:55.790 [2024-07-12 00:51:00.534413] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.534440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.553138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.553229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.553253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.571538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.571628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.571652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.589732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.589822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.589847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.607898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.607989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.608014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.626748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.626837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.626862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.645242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.645337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.645362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.664847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.664939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.664963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.685847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.685942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.685969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.703669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.703761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.703786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.790 [2024-07-12 00:51:00.722424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:55.790 [2024-07-12 00:51:00.722512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.790 [2024-07-12 00:51:00.722537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.741207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.741301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.741326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.759686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.759782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.759808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.778308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.778413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.778440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 
00:51:00.797524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.797641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.797666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.815785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.815882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.815908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.833970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.834061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.834085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.855587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.855691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.855716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.873837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.873927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.892354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.892458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.892483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.910388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.910486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.910510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.929181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.929264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.049 [2024-07-12 00:51:00.929290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.049 [2024-07-12 00:51:00.946069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.049 [2024-07-12 00:51:00.946167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.050 [2024-07-12 00:51:00.946192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.050 [2024-07-12 00:51:00.963171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.050 [2024-07-12 00:51:00.963260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.050 [2024-07-12 00:51:00.963284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.050 [2024-07-12 00:51:00.981620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.050 [2024-07-12 00:51:00.981710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.050 [2024-07-12 00:51:00.981736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.309 [2024-07-12 00:51:01.000879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.309 [2024-07-12 00:51:01.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.309 [2024-07-12 00:51:01.000999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.309 [2024-07-12 00:51:01.019495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.309 [2024-07-12 00:51:01.019579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.309 [2024-07-12 00:51:01.019603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.309 [2024-07-12 00:51:01.038015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.309 [2024-07-12 00:51:01.038104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.309 [2024-07-12 
00:51:01.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.309 [2024-07-12 00:51:01.057053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.309 [2024-07-12 00:51:01.057155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.309 [2024-07-12 00:51:01.057182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.073379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.073492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.073517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.091623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.091701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.091725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.109564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.109656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.109680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.127764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.127852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.127879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.145697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.145786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.145811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.167184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.167276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.185217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.185309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.185334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.203598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.203691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.203715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.222327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.222434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.222460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.310 [2024-07-12 00:51:01.240517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.310 [2024-07-12 00:51:01.240617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.310 [2024-07-12 00:51:01.240641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.259235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.259325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.259351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.277642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.277735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.277759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.296008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 
00:51:01.296099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.296123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.314219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.314314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.314339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.333154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.333245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.333270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.351878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.351980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-12 00:51:01.368707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:31:56.569 [2024-07-12 00:51:01.368797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-12 00:51:01.368821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 00:31:56.569 Latency(us) 00:31:56.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.569 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:56.569 nvme0n1 : 2.01 13699.74 53.51 0.00 0.00 9330.12 5332.25 23592.96 00:31:56.569 =================================================================================================================== 00:31:56.569 Total : 13699.74 53.51 0.00 0.00 9330.12 5332.25 23592.96 00:31:56.569 0 00:31:56.569 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:56.569 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:56.569 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:56.569 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:56.569 | .driver_specific 00:31:56.569 | .nvme_error 00:31:56.569 | .status_code 00:31:56.569 
00:31:56.569 | .command_transient_transport_error'
00:31:56.827 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 107 > 0 ))
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104770
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104770 ']'
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104770
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104770
00:31:57.086 00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
killing process with pid 104770
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104770'
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104770
Received shutdown signal, test time was about 2.000000 seconds
00:31:57.086
00:31:57.086 Latency(us)
00:31:57.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.086 ===================================================================================================================
00:31:57.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104770
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104862
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104862 /var/tmp/bperf.sock
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104862 ']'
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
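The teardown traced above is where the subtest actually decides pass/fail: get_transient_errcount queries bdevperf's I/O statistics over /var/tmp/bperf.sock and extracts the number of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR, and the (( 107 > 0 )) check passes because 107 such errors were observed during the 2-second run. A minimal bash sketch of that chain, reconstructed from the xtrace output (the real helpers live in host/digest.sh and common/autotest_common.sh and may differ in detail; variable names here are illustrative):

    #!/usr/bin/env bash
    # Sketch of the transient-error check, reconstructed from the xtrace above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        # bdevperf was started with --nvme-error-stat, so bdev_get_iostat
        # reports a per-status-code tally; extract the TRANSIENT TRANSPORT
        # ERROR (00/22) counter for the given bdev.
        "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # With digest corruption injected, the run must have produced at least
    # one transient transport error; otherwise the subtest fails.
    (( errcount > 0 )) || exit 1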
00:31:58.019 00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:58.019 [2024-07-12 00:51:02.891359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:31:58.019 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:58.019 Zero copy mechanism will not be used.
00:31:58.019 [2024-07-12 00:51:02.891556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104862 ]
00:31:58.276 [2024-07-12 00:51:03.063705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:58.533 [2024-07-12 00:51:03.355223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:59.098 00:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:59.356 00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:59.614 nvme0n1
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:51:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:59.614 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:59.614 Zero copy mechanism will not be used.
Running I/O for 2 seconds...
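Before the 2-second run starts, the setup traced above arms the failure mode end to end: bdevperf is told to tally NVMe status codes and never retry a failed I/O, any leftover crc32c error injection is cleared, the controller is attached with TCP data digest (--ddgst) enabled, and crc32c corruption is then injected through the accel framework (-o crc32c -t corrupt -i 32, taken verbatim from the trace). Note that bperf_rpc targets bdevperf's socket explicitly, while the rpc_cmd calls show no -s flag, so they presumably go to the target application's default RPC socket. A condensed sketch of the sequence, using only the RPCs visible above (the socket handling for rpc_cmd is an assumption):

    #!/usr/bin/env bash
    # Condensed from the xtrace above; which socket the accel_error calls use
    # is an assumption (no -s flag appears in the trace for them).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # bdevperf side: count NVMe status codes per controller and disable
    # bdev-layer retries so every digest failure is reported, not retried away.
    "$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: start from a clean slate with no crc32c injection active.
    "$rpc_py" accel_error_inject_error -o crc32c -t disable

    # bdevperf side: attach the controller with data digest enabled, so READ
    # payloads carry a crc32c that the initiator verifies on receive.
    "$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: inject crc32c corruption (arguments verbatim from the
    # trace); each corrupted digest surfaces below as a "data digest error"
    # plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed workload, as traced above.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests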
00:31:59.614 [2024-07-12 00:51:04.533912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:31:59.614 [2024-07-12 00:51:04.534008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.614 [2024-07-12 00:51:04.534048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern repeats at roughly 6 ms intervals from 00:51:04.540 through 00:51:04.882: some fifty further len:32 READ commands on qid:1 (cid and lba varying, sqhd cycling through 0001/0021/0041/0061), each flagged as a data digest error on tqpair=(0x61500002b280) and completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:32:00.136 [2024-07-12 00:51:04.888820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.136 [2024-07-12 00:51:04.888907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.888938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.895550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.895648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.902269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.908943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.908997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.909019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.915525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.915581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.915618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.922243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.922300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.922322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.928885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.928970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.929008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.935427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.935487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.935509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.942128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.942199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.942222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.948498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.948585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.948608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.954849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.954919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.954941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.961813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.962149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.962377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.968740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.968793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.968815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.975094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.975143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.975164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.981439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.981486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.981505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.987762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.987815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.987837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:04.994193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:04.994244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:04.994265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:05.000722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:05.000773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.136 [2024-07-12 00:51:05.000794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.136 [2024-07-12 00:51:05.007291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.136 [2024-07-12 00:51:05.007343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.007364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.014083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.014152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.014173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.020774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.020826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.020846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.027238] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.027308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.027330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.032110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.032170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.032201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.037771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.037823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.037844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.044007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.044065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.044086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.050605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.050661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.050682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.057074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.057130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.057150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.137 [2024-07-12 00:51:05.063552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.137 [2024-07-12 00:51:05.063604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.137 [2024-07-12 00:51:05.063625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.070088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.070148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.070169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.076689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.076738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.076758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.082847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.082908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.082928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.089206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.089256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.089276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.095598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.095665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.102090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.102147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.102167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.108650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.108698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.108717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.115149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.115196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.115216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.121348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.121451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.121471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.127748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.127795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.127814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.134467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.134522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.134542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.140817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.140865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.140884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.147120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.147169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.147188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.153524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.153586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.153604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.159786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.159850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.159869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.166171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.166240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.172559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.172608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.172629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.179081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.179136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.179156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.185441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.185501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.185521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.192055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.192103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.192123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.198361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.198421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.198442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.205204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.205255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.205274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.211726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.211791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.211810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.218280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.218331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.218352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.224753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.224812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.224833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.231414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.397 [2024-07-12 00:51:05.231508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.397 [2024-07-12 00:51:05.231529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.397 [2024-07-12 00:51:05.237965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.238048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.238069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.244232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.244293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.244315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.250698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.250768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.250789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.257408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.257485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.257509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.263982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.264091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.264124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.270824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.270913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.270937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.277625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.277700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.277722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.284176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.284245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.284266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.290640] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.290711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.290733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.297822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.297916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.297939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.305455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.305548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.305580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.312248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.312318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.312340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.318742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.318807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.318830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.398 [2024-07-12 00:51:05.325142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.398 [2024-07-12 00:51:05.325213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.398 [2024-07-12 00:51:05.325235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.331499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.331552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.331573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.337763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.337822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.337843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.341810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.341859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.341879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.348304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.348362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.348383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.354775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.354825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.354845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.360962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.361012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.361032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.367206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.367256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.373536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.373588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.373608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.379819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.379874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.379894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.386201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.386253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.386274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.392636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.392699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.392720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.399058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.399120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.658 [2024-07-12 00:51:05.399142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.658 [2024-07-12 00:51:05.404610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.658 [2024-07-12 00:51:05.404671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.404692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.408863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.408940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.408974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.415446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.415508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.415530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.421810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.421886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.421908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.428710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.428793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.428815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.435577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.435668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.435691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.442110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.442175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.442197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.448812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.448881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.448903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.455176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.455241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.659 [2024-07-12 00:51:05.455263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.659 [2024-07-12 00:51:05.461840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:00.659 [2024-07-12 00:51:05.461906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.461927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.468202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.468253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.468274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.474522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.474573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.474593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.481163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.481217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.481238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.487626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.487694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.487714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.494440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.494500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.494520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.501351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.501444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.501465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.508066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.508115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.508135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.514678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.514724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.514743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.521239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.521296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.521316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.527879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.527942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.527976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.534545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.534597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.534617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.540979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.541061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.541081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.547226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.547274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.547294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.553716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.553764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.553784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.560245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.560294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.560313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.566816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.566861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.573204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.573267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.573305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.579629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.579689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.579709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.659 [2024-07-12 00:51:05.586188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.659 [2024-07-12 00:51:05.586240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.659 [2024-07-12 00:51:05.586260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.592609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.592658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.592678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.599081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.599130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.599150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.605664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.605714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.605734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.612141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.612197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.612218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.618879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.618950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.618970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.625167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.625231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.625253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.631848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.631900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.631919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.638379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.638474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.638496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.645014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.645109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.651754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.651832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.651851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.658254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.658306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.658327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.664831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.664889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.664908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.671282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.671335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.671356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.677572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.677620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.677641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.684079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.684126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.684147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.690449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.690497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.690518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.696804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.696882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.696901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.703187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.703236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.703256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.709570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.709633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.709653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.716293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.716344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.716365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.723020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.723069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.729553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.729616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.729636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.735753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.735815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.919 [2024-07-12 00:51:05.742321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.919 [2024-07-12 00:51:05.742371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.919 [2024-07-12 00:51:05.742404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.748639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.748691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.748713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.754904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.754953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.754973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.761477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.761528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.761548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.767825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.767878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.767899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.774249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.774301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.774321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.780780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.780831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.780852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.787238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.787287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.787307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.793342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.793435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.793457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.798174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.798223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.798243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.804306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.804358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.804378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.810861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.810925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.810946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.817233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.817285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.817305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.821378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.821437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.821458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.827839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.827903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.827923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.834593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.834646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.834667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.841223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.841275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.841295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.845637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.845708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:00.920 [2024-07-12 00:51:05.850948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:00.920 [2024-07-12 00:51:05.850997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.920 [2024-07-12 00:51:05.851019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.857474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.857528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.857549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.863863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.863913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.863934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.870331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.870409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.870432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.876663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.876719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.876740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.883249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.883307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.883328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.889669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.889727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.889749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.896270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.896336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.896358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.902699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.902760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.902781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.909085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.909146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.909173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.915569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.915639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.915659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.921965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.922020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.922040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.928189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.928241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.928262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.934823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.934874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.934895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.941015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.941076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.941106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.947469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.947533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.947553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.953631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.953682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.953701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.959792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.959856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.959877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.965732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.965782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.180 [2024-07-12 00:51:05.965802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.180 [2024-07-12 00:51:05.972026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.180 [2024-07-12 00:51:05.972080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:05.972100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:05.978167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:05.978219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:05.978239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:05.984594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:05.984648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:05.984668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:05.991150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:05.991203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:05.991224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:05.997536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:05.997587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:05.997608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.003736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.003806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.010284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.010338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.010358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.016635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.016685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.020729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.020790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.020809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.026838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.026893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.026913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.032280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.032334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.032354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.037325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.037374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.037407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.042369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.042432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.042453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.047358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.047423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.047444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.053001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.053065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.053094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.058180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.058233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.058252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.063381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.063443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.069002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.069053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.069073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.074487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.074584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.074607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.079623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.079680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.079700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.084987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.085066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.085087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.090619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.090674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.090694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.096808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.096880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.096900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.101545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.101605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.101625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.181 [2024-07-12 00:51:06.107349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.181 [2024-07-12 00:51:06.107444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.181 [2024-07-12 00:51:06.107465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.114377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.114453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.114479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.121155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.121219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.121239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.125967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.126044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.126079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.132193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.132257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.132277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.138884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.138934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.138953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.143597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.143644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.143662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.149420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.149485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.149506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.155957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.156022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.156072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.160251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.160302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.160322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.166023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.166119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.166139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.170608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.170656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.441 [2024-07-12 00:51:06.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.441 [2024-07-12 00:51:06.176401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.441 [2024-07-12 00:51:06.176463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.176483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.183153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.183205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.183226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.190002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.190084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.190104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.194711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.194759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.194779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.200008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.200080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.200100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.206144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.206196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.206216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.210591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.210638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.210657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.216154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.216206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.216226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.220614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.220661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.220682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.226564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.226616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.226637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.233159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.233213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.233234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.238127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.238177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.238196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.244022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.244090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.244111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.250689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.250764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.250785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.257059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.257110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.257130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.263745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.263812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.263832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.270374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.270457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.270477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.277069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.277123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.277143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.283785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.283835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.283854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.290430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.290526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.290546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.297198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.297252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.297272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.303778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.303858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.303878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.310357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.310438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.310473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.316959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.317008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.317037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.323651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.323731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.323750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.330170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.330235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.442 [2024-07-12 00:51:06.330254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.442 [2024-07-12 00:51:06.336513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.442 [2024-07-12 00:51:06.336571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.443 [2024-07-12 00:51:06.336591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:01.443 [2024-07-12 00:51:06.342981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:32:01.443 [2024-07-12 00:51:06.343059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.443 [2024-07-12 00:51:06.343078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.443 [2024-07-12 00:51:06.349316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.443 [2024-07-12 00:51:06.349396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.443 [2024-07-12 00:51:06.349415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.443 [2024-07-12 00:51:06.355674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.443 [2024-07-12 00:51:06.355738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.443 [2024-07-12 00:51:06.355775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.443 [2024-07-12 00:51:06.362319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.443 [2024-07-12 00:51:06.362385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.443 [2024-07-12 00:51:06.362419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.443 [2024-07-12 00:51:06.368768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.443 [2024-07-12 00:51:06.368819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.443 [2024-07-12 00:51:06.368838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.375235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.375301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.375321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.381340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.381433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.381469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.387545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.387607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.387641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.392433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.392496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.392516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.398683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.398760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.398780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.405460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.405548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.405568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.409869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.409930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.409949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.416095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.416159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.416178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.421883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.421944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.421962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.426075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.426138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.426158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.431671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.431734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.431752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.437296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.437361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.437409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.442866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.442930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.442949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.447567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.447631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.447650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.453853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.453915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.453933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.459889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.459951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.459971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.464372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.464429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.464450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.470882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.470963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.470997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.476592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.476643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.476663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.481243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.703 [2024-07-12 00:51:06.481293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.703 [2024-07-12 00:51:06.481313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.703 [2024-07-12 00:51:06.487914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.487965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.487985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.492597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.492665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.498411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.498534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.498555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.505678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.505741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.505759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.512330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.512382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.512415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.518717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.518797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.518817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.704 [2024-07-12 00:51:06.525209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:32:01.704 [2024-07-12 00:51:06.525260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.704 [2024-07-12 00:51:06.525280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.704 00:32:01.704 Latency(us) 00:32:01.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:01.704 nvme0n1 : 2.00 4933.88 616.73 0.00 0.00 3237.92 815.48 8817.57 00:32:01.704 =================================================================================================================== 00:32:01.704 Total : 4933.88 616.73 0.00 0.00 3237.92 815.48 8817.57 00:32:01.704 0 00:32:01.704 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:01.704 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:01.704 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:01.704 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:01.704 | .driver_specific 00:32:01.704 | .nvme_error 00:32:01.704 | .status_code 00:32:01.704 | .command_transient_transport_error' 00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 318 > 0 )) 00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104862 00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104862 ']' 00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104862 
00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104862
00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:01.962 00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
killing process with pid 104862
00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104862'
00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104862
Received shutdown signal, test time was about 2.000000 seconds
00:32:01.962
00:32:01.962                                                                Latency(us)
00:32:01.962 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:01.962 ===================================================================================================================
00:32:01.962 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104862
00:32:03.336 00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=104959
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 104959 /var/tmp/bperf.sock
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 104959 ']'
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
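The check that closed the randread leg above boils down to one number: the command_transient_transport_error counter that bdev_get_iostat reports once bdev_nvme_set_options --nvme-error-stat is in effect. Condensed from the trace into a standalone sketch (bdev name, socket, and repo path are the values this run uses):

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check from the trace above. Assumes a
# bdevperf instance serving RPC on /var/tmp/bperf.sock with bdev nvme0n1
# attached and bdev_nvme_set_options --nvme-error-stat already applied.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat carries per-status-code NVMe error counters under
    # driver_specific.nvme_error; pull the transient-transport count.
    "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The leg passes only if the injected digest errors actually surfaced as
# TRANSIENT TRANSPORT ERROR (00/22) completions; this run counted 318.
(( $(get_transient_errcount nvme0n1) > 0 ))
```

Note that every completion above carries dnr:0 (Do Not Retry clear), and the controller was attached with --bdev-retry-count -1, so the failed I/O is retried indefinitely while the error counter keeps ticking; that is what lets the test count errors instead of failing on them.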
00:32:03.336 [2024-07-12 00:51:08.099665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:32:03.336 [2024-07-12 00:51:08.099840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104959 ]
00:32:03.336 [2024-07-12 00:51:08.265878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:03.594 [2024-07-12 00:51:08.496016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:04.205 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:04.205 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:04.205 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:04.463 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:04.463 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:04.735 nvme0n1
00:32:04.735 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:04.993 00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
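The randwrite leg is set up by the same handful of RPCs: per-status-code error counting is enabled on the bdev layer, crc32c error injection is switched off while the controller attaches with data digest (--ddgst) enabled, and only then is the accel stage told to corrupt crc32c results at the interval seen in the trace, so a steady stream of digest failures hits the live WRITEs in the run that follows. A condensed sketch of the sequence (the socket that rpc_cmd resolves to is an assumption here, taken to be the target app's default RPC socket):

```bash
#!/usr/bin/env bash
# Condensed from the trace above. Paths, address, and NQN are this run's
# values; rpc_tgt standing in for rpc_cmd (and the default-socket target
# app it talks to) is an assumption.
spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

bperf_rpc() { "$spdk"/scripts/rpc.py -s "$bperf_sock" "$@"; }
rpc_tgt()   { "$spdk"/scripts/rpc.py "$@"; }

# 4 KiB random writes, queue depth 128, 2 s timed run on core mask 0x2;
# -z makes bdevperf wait for an RPC before starting I/O.
"$spdk"/build/examples/bdevperf -m 2 -r "$bperf_sock" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
until [ -S "$bperf_sock" ]; do sleep 0.1; done   # waitforlisten in the trace

# Keep per-error-code NVMe completion stats; retry transient errors forever.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while injection is off, then corrupt
# crc32c results at the interval the trace uses (-i 256).
rpc_tgt accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the timed run (prints the latency table when done).
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
```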
00:32:04.993 Running I/O for 2 seconds...
00:32:04.993 [2024-07-12 00:51:09.735007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458
00:32:04.993 [2024-07-12 00:51:09.736412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:04.993 [2024-07-12 00:51:09.736475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
[… several dozen repeated entries elided: tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with varying pdu offsets, each followed by the failed WRITE (sqid:1, len:1, varying cid/lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; timestamps 00:51:09.753847 through 00:51:10.705183 …]
00:32:06.028 [2024-07-12 00:51:10.717100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788
00:32:06.028 [2024-07-12 00:51:10.718368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.028 [2024-07-12 00:51:10.718436] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.733525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:32:06.028 [2024-07-12 00:51:10.735563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.735635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.743363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:06.028 [2024-07-12 00:51:10.744376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.744426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.760277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:32:06.028 [2024-07-12 00:51:10.762167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.762222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.773810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:32:06.028 [2024-07-12 00:51:10.774955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.775015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.786904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:32:06.028 [2024-07-12 00:51:10.787733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.787809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.802543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:32:06.028 [2024-07-12 00:51:10.804316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.804374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.817153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:32:06.028 [2024-07-12 00:51:10.819358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 
00:51:10.819424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.827381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:32:06.028 [2024-07-12 00:51:10.828415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.828469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.844067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:32:06.028 [2024-07-12 00:51:10.845940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.845994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.856992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:32:06.028 [2024-07-12 00:51:10.858554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.858609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.870365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:32:06.028 [2024-07-12 00:51:10.871868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.871925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.887019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:32:06.028 [2024-07-12 00:51:10.889340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.889399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.897147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:32:06.028 [2024-07-12 00:51:10.898301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.898357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.914102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:32:06.028 [2024-07-12 00:51:10.916019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25042 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.916074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.926696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:32:06.028 [2024-07-12 00:51:10.928267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.928324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.939679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:32:06.028 [2024-07-12 00:51:10.941246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.941303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:06.028 [2024-07-12 00:51:10.956475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:32:06.028 [2024-07-12 00:51:10.958758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.028 [2024-07-12 00:51:10.958813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:10.966579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:32:06.286 [2024-07-12 00:51:10.967827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:10.967870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:10.985299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:32:06.286 [2024-07-12 00:51:10.987349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:10.987404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:10.999299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:32:06.286 [2024-07-12 00:51:11.001235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.001294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.014087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:32:06.286 [2024-07-12 00:51:11.015810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.015866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.026777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:32:06.286 [2024-07-12 00:51:11.028095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.028150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.039437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:32:06.286 [2024-07-12 00:51:11.040781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.040855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.055129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:32:06.286 [2024-07-12 00:51:11.057273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.057329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.064939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:06.286 [2024-07-12 00:51:11.065966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.066019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.080553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:32:06.286 [2024-07-12 00:51:11.082282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.082351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.093430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:32:06.286 [2024-07-12 00:51:11.094924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.094981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.107138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:32:06.286 [2024-07-12 
00:51:11.108593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.125978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:32:06.286 [2024-07-12 00:51:11.128328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.128388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.137074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:32:06.286 [2024-07-12 00:51:11.138175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.138234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.155280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:32:06.286 [2024-07-12 00:51:11.157203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.157261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.169545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:32:06.286 [2024-07-12 00:51:11.171109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.171166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.183229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:32:06.286 [2024-07-12 00:51:11.184840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.184912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.199745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:32:06.286 [2024-07-12 00:51:11.201967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.202023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:06.286 [2024-07-12 00:51:11.209862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195eb328 00:32:06.286 [2024-07-12 00:51:11.211039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.286 [2024-07-12 00:51:11.211095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.226917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:32:06.544 [2024-07-12 00:51:11.228928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.228995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.240191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:32:06.544 [2024-07-12 00:51:11.241897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.241953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.254969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:32:06.544 [2024-07-12 00:51:11.256672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.256716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.273046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:32:06.544 [2024-07-12 00:51:11.275520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.283899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:32:06.544 [2024-07-12 00:51:11.285199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.285255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.301422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:32:06.544 [2024-07-12 00:51:11.303478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.303535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.311420] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:32:06.544 [2024-07-12 00:51:11.312339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.312382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.328255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:32:06.544 [2024-07-12 00:51:11.329994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.341227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:06.544 [2024-07-12 00:51:11.342643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.342699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.354941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:32:06.544 [2024-07-12 00:51:11.356300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.356372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.371466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:32:06.544 [2024-07-12 00:51:11.373633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.381727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:32:06.544 [2024-07-12 00:51:11.382727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.382811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.398545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:32:06.544 [2024-07-12 00:51:11.400348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.400402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.411553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:32:06.544 [2024-07-12 00:51:11.413054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.413112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.425236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:32:06.544 [2024-07-12 00:51:11.426724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.544 [2024-07-12 00:51:11.426796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:06.544 [2024-07-12 00:51:11.442538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:32:06.545 [2024-07-12 00:51:11.444848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.545 [2024-07-12 00:51:11.444921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:06.545 [2024-07-12 00:51:11.452655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:32:06.545 [2024-07-12 00:51:11.453802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.545 [2024-07-12 00:51:11.453857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:06.545 [2024-07-12 00:51:11.469818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:32:06.545 [2024-07-12 00:51:11.471701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.545 [2024-07-12 00:51:11.471743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.483221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:32:06.803 [2024-07-12 00:51:11.484968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.485026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.497027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:32:06.803 [2024-07-12 00:51:11.498588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.511488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:32:06.803 [2024-07-12 00:51:11.512456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.512513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.530473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:32:06.803 [2024-07-12 00:51:11.533078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.533130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.542169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:32:06.803 [2024-07-12 00:51:11.543477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.543539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.557682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:32:06.803 [2024-07-12 00:51:11.558795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.558868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.577486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:32:06.803 [2024-07-12 00:51:11.579632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.579687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.592607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:32:06.803 [2024-07-12 00:51:11.594432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.594506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.608173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:32:06.803 [2024-07-12 00:51:11.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 
00:51:11.610035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.627755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:32:06.803 [2024-07-12 00:51:11.630444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.630501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.639359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:32:06.803 [2024-07-12 00:51:11.640759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.640814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.659311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:32:06.803 [2024-07-12 00:51:11.661642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.661700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.670801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb760 00:32:06.803 [2024-07-12 00:51:11.671818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.671872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.690259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:32:06.803 [2024-07-12 00:51:11.692167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.692232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.703911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:32:06.803 [2024-07-12 00:51:11.705427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.803 [2024-07-12 00:51:11.705515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:06.803 [2024-07-12 00:51:11.718203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:32:06.803 [2024-07-12 00:51:11.719625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7353 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.803 [2024-07-12 00:51:11.719681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:06.803
00:32:06.803 Latency(us)
00:32:06.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.803 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:06.803 nvme0n1 : 2.01 17564.42 68.61 0.00 0.00 7279.74 3410.85 20256.58
00:32:06.803 ===================================================================================================================
00:32:06.803 Total : 17564.42 68.61 0.00 0.00 7279.74 3410.85 20256.58
00:32:06.803 0
00:32:07.061 00:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:07.061 00:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:07.061 | .driver_specific
00:32:07.061 | .nvme_error
00:32:07.061 | .status_code
00:32:07.061 | .command_transient_transport_error'
00:32:07.061 00:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:07.061 00:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 104959
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104959 ']'
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104959
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104959
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:32:07.318 killing process with pid 104959 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104959'
00:32:07.318 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104959
00:32:07.318 Received shutdown signal, test time was about 2.000000 seconds
00:32:07.318
00:32:07.319 Latency(us)
00:32:07.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:07.319 ===================================================================================================================
00:32:07.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:07.319 00:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104959
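The trace above is the pass check for this first run: get_transient_errcount asks the still-running bdevperf for its NVMe error counters over the bperf RPC socket, and the assertion (( 138 > 0 )) passes because 138 transient transport errors were recorded. As a standalone sketch, the query boils down to the one-liner below; the socket path, bdev name, and jq filter are taken verbatim from the trace, and the counters exist because the controller was attached after bdev_nvme_set_options --nvme-error-stat (visible in the next run's setup below):

    # Fetch per-bdev I/O statistics from bdevperf and extract the number of
    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each "Data digest error" logged above should show up as one increment of this counter, which is what ties the injected CRC-32C corruption to a countable completion status.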
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=105057
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 105057 /var/tmp/bperf.sock
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 105057 ']'
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:08.262 00:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:08.526 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:08.526 Zero copy mechanism will not be used.
00:32:08.526 [2024-07-12 00:51:13.243696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:32:08.526 [2024-07-12 00:51:13.243870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105057 ]
00:32:08.526 [2024-07-12 00:51:13.406762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:08.783 [2024-07-12 00:51:13.680793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:09.384 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:09.384 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:32:09.384 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:09.384 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:09.642 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:09.900 nvme0n1
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:09.900 00:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:10.159 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:10.159 Zero copy mechanism will not be used.
00:32:10.159 Running I/O for 2 seconds...
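This second pass repeats the experiment at a larger I/O size (randwrite, 131072-byte blocks, queue depth 16). A condensed sketch of the sequence just traced; one assumption is called out in the comments: rpc_cmd in these scripts targets the NVMe-oF target application's default RPC socket, which the trace does not show, while bperf_rpc explicitly targets bdevperf's /var/tmp/bperf.sock:

    # bdevperf side: keep NVMe error statistics and retry forever, then attach
    # the target over TCP with data digest (--ddgst) enabled on the connection
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (assumed default RPC socket): make the accel framework corrupt
    # crc32c results so computed data digests mismatch, options exactly as traced
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # drive the timed workload inside bdevperf
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With corruption armed, each failed digest check surfaces as the triple of log lines that follows: the digest error itself, the WRITE command it hit, and its TRANSIENT TRANSPORT ERROR (00/22) completion.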
00:32:10.159 [2024-07-12 00:51:14.861568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.861964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.862035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.868862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.869268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.869318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.875910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.876311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.876376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.883089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.883515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.883560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.890067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.890516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.890578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.896900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.897265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.897328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.903591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.903953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.903999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.910143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.910513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.910559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.916690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.917058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.923203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.923559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.923605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.929738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12 00:51:14.930088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.159 [2024-07-12 00:51:14.930135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:10.159 [2024-07-12 00:51:14.936298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.159 [2024-07-12
00:51:14.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.936776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.943342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.943706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.943752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.950188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.950590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.950635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.957089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.957464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.957524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.963899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.964258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.964305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.970640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.970961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.971006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.977610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.977951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.978007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.984604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.984967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.985027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.991330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.991683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.991728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:14.998371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:14.998724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:14.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:15.005733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:15.006128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:15.006177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:15.013516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:15.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:15.013896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:15.020622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:15.021019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.159 [2024-07-12 00:51:15.021065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.159 [2024-07-12 00:51:15.027941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.159 [2024-07-12 00:51:15.028302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.028349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 
00:51:15.035355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.035746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.035803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.042748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.043109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.043156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.050498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.050832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.050880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.058024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.058362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.058433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.065493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.065828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.065874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.073097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.073426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.073484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.080661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.081060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.081104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.160 [2024-07-12 00:51:15.088469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.160 [2024-07-12 00:51:15.088851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.160 [2024-07-12 00:51:15.088914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.095972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.096318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.096365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.103406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.103811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.103858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.110931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.111278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.111327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.118329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.118732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.118778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.125826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.126183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.126227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.133007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.133377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.133434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.140630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.141005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.141066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.147950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.148331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.148378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.155354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.155714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.162370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.162718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.162780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.169605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.169925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.169969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.176567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.176950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.176994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.183622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.183962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.184007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.190766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.191094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.191135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.198018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.198385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.198454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.205320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.205681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.205726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.212548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.212900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.212944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.219792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.220136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.220181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.418 [2024-07-12 00:51:15.227078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.418 [2024-07-12 00:51:15.227422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.418 [2024-07-12 00:51:15.227475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.234441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.234777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.234822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.241898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.242278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.242328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.249450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.249783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.249828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.256699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.257126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.264295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.264700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.264748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.271710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.272068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.272113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.278932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.279282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.279328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.286234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.286597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.286641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.293444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.293764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.293810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.300518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.300943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.300989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.307977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.308324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.308371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.315246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.315598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.315641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.322653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.323000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.323047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.329998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.330357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.330430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.337309] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.337695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.337742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.344622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.344994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.345041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.419 [2024-07-12 00:51:15.351572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.419 [2024-07-12 00:51:15.351931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.419 [2024-07-12 00:51:15.351976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.358732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.359115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.359162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.366133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.366466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.373691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.374002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.374063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.380876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.381207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.381256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.388235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.388598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.388643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.395387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.395781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.395827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.402665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.403008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.403054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.410256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.410693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.410745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.417807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.418241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.418290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.425680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.426071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.426120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.433307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.433760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.433808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.440819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.441286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.441334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.448855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.449300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.449374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.456586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.457023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.457075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.463951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.464318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.464368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.471070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.471446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.471505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.478061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.677 [2024-07-12 00:51:15.478441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.677 [2024-07-12 00:51:15.478488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.677 [2024-07-12 00:51:15.485501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.485924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.492612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.493013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.493059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.500463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.500858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.500923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.507689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.508069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.508132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.515308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.515938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.516050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.522257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.522555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.522601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.528263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.528497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.528544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.534230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.534484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.534552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.540013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.540248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.540285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.545595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.545804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.545835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.551609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.551812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.551856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.557465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.557710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.557754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.563131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.563367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.563441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.569459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.569700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.569749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.575129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.575381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.575446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.581080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.581314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.581363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.587181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.587421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.587455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.593226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.593486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.593522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.599192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.599409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.599456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.605199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.605403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.605437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.678 [2024-07-12 00:51:15.611113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.678 [2024-07-12 00:51:15.611366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.678 [2024-07-12 00:51:15.611401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.617070] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.617271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.617328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.623007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.623242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.623285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.629048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.629294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.629326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.634998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.635258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.635307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.640938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.641184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.641234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.647005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.647228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.647291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.652988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.653241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.653274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.659121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.659354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.659387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.665342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.665607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.665671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.671285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.671513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.671555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.677140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.677348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.677416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.683006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.683212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.683255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.688716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.688923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.688967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.937 [2024-07-12 00:51:15.694501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:10.937 [2024-07-12 00:51:15.694723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.937 [2024-07-12 00:51:15.694779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:10.937 [2024-07-12 00:51:15.700318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.937 [2024-07-12 00:51:15.700576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.937 [2024-07-12 00:51:15.700620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:10.937 [2024-07-12 00:51:15.706108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:10.937 [2024-07-12 00:51:15.706356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.937 [2024-07-12 00:51:15.706409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message sequence repeats roughly every 6 ms from 00:51:15.712 through 00:51:16.597: tcp.c:2067:data_crc32_calc_done data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90, then WRITE sqid:1 nsid:1 len:32 at varying LBAs (cid cycling 15/0/1), then COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061 ...]
00:32:11.717 [2024-07-12 00:51:16.603282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:11.717 [2024-07-12 00:51:16.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.717 [2024-07-12 00:51:16.603549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:11.717 [2024-07-12 00:51:16.609202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:32:11.717 [2024-07-12 00:51:16.609422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.717 [2024-07-12
00:51:16.609460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.615090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.615296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.615336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.621204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.621432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.621472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.627148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.627354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.633258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.633480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.633520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.639209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.639428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.639461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.717 [2024-07-12 00:51:16.645177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.717 [2024-07-12 00:51:16.645410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.717 [2024-07-12 00:51:16.645458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.651195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.651404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.651444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.657120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.657315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.657355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.663139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.663330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.663371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.669175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.669385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.669440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.675122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.675315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.675355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.681071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.681294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.681335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.687026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.687223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.687264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.693037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.693242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.693283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.699095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.699307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.699348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.705018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.705225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.705265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.710988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.711183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.976 [2024-07-12 00:51:16.711223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.976 [2024-07-12 00:51:16.716916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.976 [2024-07-12 00:51:16.717124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.717164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.722715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.722907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.722948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.728655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.728884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.728929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.734705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.734906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.734946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.740684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.740932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.746543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.746742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.746784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.752470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.752693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.752733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.758310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.758532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.758568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.764214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.764423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.764464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.770148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.770360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.770415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.776098] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.776318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.776360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.782093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.782305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.782347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.788005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.788215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.788256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.794049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.794247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.794289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.799933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.800145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.800189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.805908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.806117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.806161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.811842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.812057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.812102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.818360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.818874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.818959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.825133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.825415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.825460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.831708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.832072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.832131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.837962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.838204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.838254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.843966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.844195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.844243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.849869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.850076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.850115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.977 [2024-07-12 00:51:16.855555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:32:11.977 [2024-07-12 00:51:16.855673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.977 [2024-07-12 00:51:16.855708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:11.977 00:32:11.977 Latency(us) 00:32:11.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.977 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:11.977 nvme0n1 : 2.00 4737.20 592.15 0.00 0.00 3367.63 1772.45 14537.08 00:32:11.977 =================================================================================================================== 00:32:11.977 Total : 4737.20 592.15 0.00 0.00 3367.63 1772.45 14537.08 00:32:11.977 0 00:32:11.977 00:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:11.977 00:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:11.977 | .driver_specific 00:32:11.977 | .nvme_error 00:32:11.977 | .status_code 00:32:11.977 | .command_transient_transport_error' 00:32:11.977 00:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:11.977 00:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 306 > 0 )) 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 105057 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 105057 ']' 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 105057 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105057 00:32:12.542 killing process with pid 105057 00:32:12.542 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.542 00:32:12.542 Latency(us) 00:32:12.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.542 =================================================================================================================== 00:32:12.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105057' 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 105057 00:32:12.542 00:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 105057 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 104725 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 104725 ']' 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 104725 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:32:13.915 00:51:18 
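For reference while reading the trace: get_transient_errcount above is nothing more than a bdev_get_iostat RPC piped through a jq filter. A minimal standalone sketch of the same check (socket path, bdev name and jq path are taken verbatim from the trace; the variable names are this sketch's own):

#!/usr/bin/env bash
# Sketch: count NVMe transient transport errors for a bdev over the bperf RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # the bdevperf instance's RPC socket, as in the trace
bdev=nvme0n1

# bdev_get_iostat exposes per-bdev NVMe error counters under .driver_specific.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
  jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The digest_error test passes when at least one injected data-digest error
# surfaced as a transient transport error; this run saw 306 of them.
(( errcount > 0 )) || exit 1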
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104725 00:32:13.915 killing process with pid 104725 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104725' 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 104725 00:32:13.915 00:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 104725 00:32:15.333 00:32:15.333 real 0m23.833s 00:32:15.333 user 0m44.609s 00:32:15.333 sys 0m5.179s 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:15.333 ************************************ 00:32:15.333 END TEST nvmf_digest_error 00:32:15.333 ************************************ 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:15.333 rmmod nvme_tcp 00:32:15.333 rmmod nvme_fabrics 00:32:15.333 rmmod nvme_keyring 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 104725 ']' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 104725 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 104725 ']' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 104725 00:32:15.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (104725) - No such process 00:32:15.333 Process with pid 104725 is not found 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 104725 is not found' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:15.333 00:51:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.334 00:51:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:15.334 ************************************ 00:32:15.334 END TEST nvmf_digest 00:32:15.334 ************************************ 00:32:15.334 00:32:15.334 real 0m50.108s 00:32:15.334 user 1m32.119s 00:32:15.334 sys 0m11.031s 00:32:15.334 00:51:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.334 00:51:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:15.334 00:51:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:15.334 00:51:20 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:32:15.334 00:51:20 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:32:15.334 00:51:20 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:15.334 00:51:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:15.334 00:51:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.334 00:51:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:15.334 ************************************ 00:32:15.334 START TEST nvmf_mdns_discovery 00:32:15.334 ************************************ 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:15.334 * Looking for test storage... 
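Before the mDNS discovery output continues: the nvmftestfini teardown traced just above is easier to follow in one piece. A condensed sketch, with commands taken from the trace; the body of _remove_spdk_ns is not shown in the log, so the netns deletion here is an assumption:

# Unload the kernel NVMe/TCP initiator stack; the rmmod lines above show
# nvme_tcp, nvme_fabrics and nvme_keyring being removed.
sync
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
# _remove_spdk_ns presumably deletes the target namespace (assumed; not traced):
ip netns delete nvmf_tgt_ns_spdk 2> /dev/null || true
# Final cleanup, exactly as traced:
ip -4 addr flush nvmf_init_if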
00:32:15.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:32:15.334 
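The host identity used throughout this test comes from nvme gen-hostnqn, as traced above. A small sketch of the derivation (the uuid is random per run; the parameter expansion shown is one plausible way common.sh strips it out of the NQN):

# Generate a host NQN and derive the bare host ID from its uuid suffix.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the text after the last ':' is the uuid itself
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host: $NVME_HOSTNQN / $NVME_HOSTID"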
00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:15.334 Cannot find device "nvmf_tgt_br" 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:15.334 Cannot find device "nvmf_tgt_br2" 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:32:15.334 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:15.594 Cannot find device "nvmf_tgt_br" 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:15.594 Cannot find device "nvmf_tgt_br2" 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:15.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:15.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:15.594 00:51:20 
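The namespace and veth plumbing traced above is easier to follow as a block; every command below appears verbatim in the trace (the bridge enslaving, the iptables ACCEPT rules and the ping checks follow in the next lines of the log):

# Two target-side veth endpoints live inside nvmf_tgt_ns_spdk; their peers
# stay in the root namespace and get bridged to the initiator interface.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target listener 2
ip link add nvmf_br type bridge
ip link set nvmf_br up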
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:15.594 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:15.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:32:15.852 00:32:15.852 --- 10.0.0.2 ping statistics --- 00:32:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.852 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:15.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:15.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:32:15.852 00:32:15.852 --- 10.0.0.3 ping statistics --- 00:32:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.852 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:15.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:32:15.852 00:32:15.852 --- 10.0.0.1 ping statistics --- 00:32:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.852 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=105373 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 105373 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 105373 ']' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.852 00:51:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.852 [2024-07-12 00:51:20.729012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:15.852 [2024-07-12 00:51:20.729253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.112 [2024-07-12 00:51:20.918676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.370 [2024-07-12 00:51:21.262209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.371 [2024-07-12 00:51:21.262283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.371 [2024-07-12 00:51:21.262300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.371 [2024-07-12 00:51:21.262315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.371 [2024-07-12 00:51:21.262327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
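What waitforlisten does while the target comes up: the app is launched inside the namespace with --wait-for-rpc (so it pauses before subsystem init), and the helper polls the RPC socket until it answers. A sketch of the equivalent by hand (the polling loop is a simplification of the harness helper; rpc.py reaches the unix socket from the root namespace because unix sockets are not netns-scoped):

# Launch the target in the namespace, paused until framework_start_init.
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
  sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"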
00:32:16.371 [2024-07-12 00:51:21.262373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.936 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.937 00:51:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.194 [2024-07-12 00:51:22.105238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.194 [2024-07-12 00:51:22.117407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.194 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.453 null0 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
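The rpc_cmd sequence just traced configures the paused target end to end. Run by hand it would look like this (commands and arguments appear verbatim above; the comments are interpretation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --discovery-filter=address   # filter discovery log entries by address
$rpc framework_start_init                         # leave the --wait-for-rpc pause
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets in-capsule data size
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512              # 1000 MB null bdev, 512-byte blocks
$rpc bdev_null_create null1 1000 512              # null2/null3 follow in the trace below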
00:32:17.453 null1 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.453 null2 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.453 null3 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=105423 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 105423 /tmp/host.sock 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 105423 ']' 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.453 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.453 00:51:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.453 [2024-07-12 00:51:22.268648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:17.453 [2024-07-12 00:51:22.268805] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105423 ] 00:32:17.711 [2024-07-12 00:51:22.436635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.970 [2024-07-12 00:51:22.791555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=105453 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:32:18.587 00:51:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:32:18.587 Process 991 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:32:18.587 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:32:18.587 Successfully dropped root privileges. 00:32:18.587 avahi-daemon 0.8 starting up. 00:32:18.587 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:32:18.587 Successfully called chroot(). 00:32:18.587 Successfully dropped remaining capabilities. 00:32:18.587 No service file found in /etc/avahi/services. 00:32:19.522 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:32:19.522 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:32:19.522 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:32:19.522 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:32:19.522 Network interface enumeration completed. 00:32:19.522 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:32:19.522 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:32:19.522 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:32:19.522 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:32:19.522 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2236148854. 
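The avahi-daemon above receives its configuration through bash process substitution: the -f /dev/fd/63 argument is the echo -e pipeline shown in the trace. Written out as an ordinary config file it amounts to the sketch below; the /tmp path and the avahi-browse check are illustrative additions, not part of the harness:

    cat > /tmp/avahi-nvmf.conf <<'EOF'
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
    EOF
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf
    # Resolve whatever _nvme-disc._tcp services are currently advertised, then exit
    ip netns exec nvmf_tgt_ns_spdk avahi-browse --resolve --terminate _nvme-disc._tcp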
00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:19.522 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:19.781 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.782 [2024-07-12 00:51:24.697358] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.782 00:51:24 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:19.782 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.040 [2024-07-12 00:51:24.770824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.040 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.041 [2024-07-12 00:51:24.810775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.041 
00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.041 [2024-07-12 00:51:24.818697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.041 00:51:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:32:20.975 [2024-07-12 00:51:25.597428] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:32:21.541 [2024-07-12 00:51:26.197380] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:21.541 [2024-07-12 00:51:26.197478] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:21.541 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:21.541 cookie is 0 00:32:21.541 is_local: 1 00:32:21.541 our_own: 0 00:32:21.541 wide_area: 0 00:32:21.541 multicast: 1 00:32:21.541 cached: 1 00:32:21.541 [2024-07-12 00:51:26.297419] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:21.541 [2024-07-12 00:51:26.297517] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:21.541 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:21.541 cookie is 0 00:32:21.541 is_local: 1 00:32:21.541 our_own: 0 00:32:21.541 wide_area: 0 00:32:21.541 multicast: 1 00:32:21.541 cached: 1 00:32:21.541 [2024-07-12 00:51:26.297561] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:32:21.541 [2024-07-12 00:51:26.397454] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:21.541 [2024-07-12 00:51:26.397528] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:21.541 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:21.541 cookie is 0 00:32:21.541 is_local: 1 00:32:21.541 our_own: 0 00:32:21.541 wide_area: 0 00:32:21.541 multicast: 1 00:32:21.541 cached: 1 00:32:21.799 [2024-07-12 00:51:26.497442] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:21.799 [2024-07-12 00:51:26.497509] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:21.799 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:21.799 cookie is 0 00:32:21.799 is_local: 1 00:32:21.799 our_own: 0 00:32:21.799 wide_area: 0 00:32:21.799 multicast: 1 00:32:21.799 cached: 1 00:32:21.799 [2024-07-12 00:51:26.497544] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:32:22.364 [2024-07-12 00:51:27.204916] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:32:22.364 [2024-07-12 00:51:27.204983] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:32:22.364 [2024-07-12 00:51:27.205026] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:22.364 [2024-07-12 00:51:27.291198] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:32:22.622 [2024-07-12 00:51:27.356752] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:22.622 [2024-07-12 00:51:27.356809] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:22.622 [2024-07-12 00:51:27.405232] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:22.622 [2024-07-12 00:51:27.405293] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:22.622 [2024-07-12 00:51:27.405350] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:22.622 [2024-07-12 00:51:27.492523] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:32:22.622 [2024-07-12 00:51:27.556638] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:22.622 [2024-07-12 00:51:27.556700] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:32:25.154 00:51:29 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:25.154 00:51:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:32:25.154 
00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:25.154 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.412 00:51:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:26.345 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.604 [2024-07-12 00:51:31.363963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:26.604 [2024-07-12 00:51:31.364381] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:26.604 [2024-07-12 00:51:31.364446] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:26.604 [2024-07-12 00:51:31.364507] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:26.604 [2024-07-12 00:51:31.364565] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.604 [2024-07-12 00:51:31.371839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:32:26.604 [2024-07-12 00:51:31.372359] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:26.604 [2024-07-12 00:51:31.372508] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.604 00:51:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:32:26.604 [2024-07-12 00:51:31.504715] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:32:26.604 [2024-07-12 00:51:31.505663] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:32:26.862 [2024-07-12 00:51:31.569452] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:26.862 [2024-07-12 00:51:31.569579] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:26.862 [2024-07-12 00:51:31.569602] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:26.862 [2024-07-12 00:51:31.569640] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:26.862 [2024-07-12 
00:51:31.570133] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:26.862 [2024-07-12 00:51:31.570171] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:26.862 [2024-07-12 00:51:31.570182] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:26.862 [2024-07-12 00:51:31.570217] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:26.862 [2024-07-12 00:51:31.614877] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:26.862 [2024-07-12 00:51:31.614913] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:26.862 [2024-07-12 00:51:31.615864] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:26.862 [2024-07-12 00:51:31.615894] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:27.456 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:32:27.732 
00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.732 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.733 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.992 [2024-07-12 00:51:32.689580] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:27.992 [2024-07-12 00:51:32.689642] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:27.992 [2024-07-12 00:51:32.689701] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:27.992 [2024-07-12 00:51:32.689728] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.992 [2024-07-12 00:51:32.695877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.695933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.695955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.695971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.695986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.695999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.696015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.696029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.696043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the 
state(5) to be set 00:32:27.992 [2024-07-12 00:51:32.701579] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:27.992 [2024-07-12 00:51:32.701666] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:27.992 [2024-07-12 00:51:32.703237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.703283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.703303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.703317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.703333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.703346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.703361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.992 [2024-07-12 00:51:32.703375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.992 [2024-07-12 00:51:32.703388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.992 [2024-07-12 00:51:32.705828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:32:27.992 00:51:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:32:27.992 [2024-07-12 00:51:32.713190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:32:27.992 [2024-07-12 00:51:32.715855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.992 [2024-07-12 00:51:32.716013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.992 [2024-07-12 00:51:32.716053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:32:27.992 [2024-07-12 00:51:32.716071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:32:27.992 [2024-07-12 00:51:32.716097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:32:27.992 [2024-07-12 00:51:32.716150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.992 [2024-07-12 00:51:32.716172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.992 [2024-07-12 00:51:32.716189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in 
failed state. 00:32:27.992 [2024-07-12 00:51:32.716216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.992 [2024-07-12 00:51:32.723208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:27.992 [2024-07-12 00:51:32.723343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.992 [2024-07-12 00:51:32.723376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:32:27.992 [2024-07-12 00:51:32.723407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:32:27.992 [2024-07-12 00:51:32.723442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.723464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.723478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.723492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:27.993 [2024-07-12 00:51:32.723516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.993 [2024-07-12 00:51:32.725943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.993 [2024-07-12 00:51:32.726082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.993 [2024-07-12 00:51:32.726112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:32:27.993 [2024-07-12 00:51:32.726128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:32:27.993 [2024-07-12 00:51:32.726152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.726173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.726191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.726203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.993 [2024-07-12 00:51:32.726226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
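The connect() failures with errno 111 (ECONNREFUSED) are expected at this point: the test has just removed both 4420 listeners, so every reconnect bdev_nvme attempts against the old path is refused and retried until the next discovery log page prunes it. The surviving 4421 path can be confirmed through the host app's RPC socket with the same call the harness traces, assuming SPDK's bundled rpc.py client at the repo path seen above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_get_controllers -n mdns1_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'   # reports only 4421 once the 4420 path is gone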
00:32:27.993 [2024-07-12 00:51:32.733300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:27.993 [2024-07-12 00:51:32.733441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.993 [2024-07-12 00:51:32.733471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:32:27.993 [2024-07-12 00:51:32.733488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:32:27.993 [2024-07-12 00:51:32.733512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.733533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.733546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.733558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:27.993 [2024-07-12 00:51:32.733581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.993 [2024-07-12 00:51:32.736045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.993 [2024-07-12 00:51:32.736183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.993 [2024-07-12 00:51:32.736220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:32:27.993 [2024-07-12 00:51:32.736238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:32:27.993 [2024-07-12 00:51:32.736262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.736283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.736297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.736310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.993 [2024-07-12 00:51:32.736333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:27.993 [2024-07-12 00:51:32.743384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:27.993 [2024-07-12 00:51:32.743516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.993 [2024-07-12 00:51:32.743546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:32:27.993 [2024-07-12 00:51:32.743562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:32:27.993 [2024-07-12 00:51:32.743586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.743618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.743631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.743645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:27.993 [2024-07-12 00:51:32.743690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:27.993 [2024-07-12 00:51:32.746145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:27.993 [2024-07-12 00:51:32.746273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.993 [2024-07-12 00:51:32.746304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.2, port=4420 00:32:27.993 [2024-07-12 00:51:32.746320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(5) to be set 00:32:27.993 [2024-07-12 00:51:32.746344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:32:27.993 [2024-07-12 00:51:32.746367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.993 [2024-07-12 00:51:32.746380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:27.993 [2024-07-12 00:51:32.746410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.993 [2024-07-12 00:51:32.746438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[The identical resetting/reconnect-failure sequence for nqn.2016-06.io.spdk:cnode20 (10.0.0.3:4420) and nqn.2016-06.io.spdk:cnode0 (10.0.0.2:4420) repeats eight more times at ~10 ms intervals, 00:51:32.753 through 00:51:32.827, each pass ending in "Resetting controller failed."]
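The churn is expected: the test moved both subsystems from port 4420 to 4421, so every reconnect aimed at the stale 4420 path is refused until the next discovery log page (fetched immediately below) drops the old path and re-adds 4421. Outside the harness, the same view could be had with stock nvme-cli — a sketch, assuming it runs somewhere 10.0.0.3:8009 is reachable:

  # list the current discovery log entries; after the switch only trsvcid 4421
  # should be reported for nqn.2016-06.io.spdk:cnode20
  nvme discover -t tcp -a 10.0.0.3 -s 8009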
00:32:27.994 [2024-07-12 00:51:32.833411] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:32:27.994 [2024-07-12 00:51:32.833481] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:27.994 [2024-07-12 00:51:32.833524] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:27.994 [2024-07-12 00:51:32.833595] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:27.994 [2024-07-12 00:51:32.833623] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:27.994 [2024-07-12 00:51:32.833651] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.994 [2024-07-12 00:51:32.921564] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:27.994 [2024-07-12 00:51:32.921675] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
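The get_subsystem_paths helper invoked here reduces to a single pipeline over bdev_nvme_get_controllers, as its trace below shows; spelled out with plain rpc.py standing in for the suite's rpc_cmd wrapper on /tmp/host.sock:

  # print the sorted service ports of every active path of one controller;
  # the test asserts the result is exactly "4421" after the failover
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs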
00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:28.927 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:51:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.185 00:51:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:32:29.185 [2024-07-12 00:51:34.097460] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:32:30.115 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.373 00:51:35 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.373 [2024-07-12 00:51:35.278820] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:32:30.373 
2024/07/12 00:51:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:32:30.373 request: 00:32:30.373 { 00:32:30.373 "method": "bdev_nvme_start_mdns_discovery", 00:32:30.373 "params": { 00:32:30.373 "name": "mdns", 00:32:30.373 "svcname": "_nvme-disc._http", 00:32:30.373 "hostnqn": "nqn.2021-12.io.spdk:test" 00:32:30.373 } 00:32:30.373 } 00:32:30.373 Got JSON-RPC error response 00:32:30.373 GoRPCClient: error on JSON-RPC call 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:30.373 00:51:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:32:30.937 [2024-07-12 00:51:35.867871] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:32:31.194 [2024-07-12 00:51:35.967862] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:32:31.194 [2024-07-12 00:51:36.067905] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:31.194 [2024-07-12 00:51:36.067970] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:31.194 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:31.194 cookie is 0 00:32:31.194 is_local: 1 00:32:31.194 our_own: 0 00:32:31.194 wide_area: 0 00:32:31.194 multicast: 1 00:32:31.194 cached: 1 00:32:31.451 [2024-07-12 00:51:36.167907] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:31.451 [2024-07-12 00:51:36.167982] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:32:31.451 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:31.451 cookie is 0 00:32:31.451 is_local: 1 00:32:31.451 our_own: 0 00:32:31.451 wide_area: 0 00:32:31.451 multicast: 1 00:32:31.451 cached: 1 00:32:31.451 [2024-07-12 00:51:36.168026] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:32:31.451 [2024-07-12 00:51:36.267892] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:32:31.451 [2024-07-12 00:51:36.267951] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:31.451 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:31.451 cookie is 0 00:32:31.451 is_local: 1 00:32:31.451 our_own: 0 00:32:31.451 wide_area: 0 00:32:31.451 multicast: 1 00:32:31.451 cached: 1 00:32:31.451 [2024-07-12 00:51:36.367907] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:32:31.451 [2024-07-12 00:51:36.367977] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:32:31.451 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:32:31.451 cookie is 0 00:32:31.451 is_local: 1 00:32:31.451 our_own: 0 00:32:31.451 wide_area: 0 00:32:31.451 multicast: 1 00:32:31.451 cached: 1 00:32:31.451 [2024-07-12 00:51:36.368008] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:32:32.425 [2024-07-12 00:51:37.077770] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:32:32.426 [2024-07-12 00:51:37.077838] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:32:32.426 [2024-07-12 00:51:37.077908] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:32.426 [2024-07-12 00:51:37.164015] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:32:32.426 [2024-07-12 00:51:37.234805] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:32.426 [2024-07-12 00:51:37.234888] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:32.426 [2024-07-12 00:51:37.277818] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:32.426 [2024-07-12 00:51:37.277873] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:32.426 [2024-07-12 00:51:37.277909] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:32.684 [2024-07-12 00:51:37.364070] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:32:32.684 [2024-07-12 00:51:37.435124] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:32.684 [2024-07-12 00:51:37.435213] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:32:35.964 00:51:40 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 [2024-07-12 00:51:40.469815] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:32:35.964 2024/07/12 00:51:40 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:32:35.964 request: 00:32:35.964 { 00:32:35.964 "method": "bdev_nvme_start_mdns_discovery", 00:32:35.964 "params": { 00:32:35.964 "name": "cdc", 00:32:35.964 "svcname": "_nvme-disc._tcp", 00:32:35.964 "hostnqn": "nqn.2021-12.io.spdk:test" 00:32:35.964 } 00:32:35.964 } 00:32:35.964 Got JSON-RPC error response 00:32:35.964 GoRPCClient: error on JSON-RPC call 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 105423 00:32:35.964 00:51:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 105423 00:32:35.964 [2024-07-12 00:51:40.881959] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 105453 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:32:36.897 Got SIGTERM, quitting. 00:32:36.897 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:32:36.897 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:32:36.897 avahi-daemon 0.8 exiting. 
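Condensed, the duplicate-registration coverage that just finished (@181 through @193 above) comes down to four RPCs; rpc.py again shown for the suite's rpc_cmd wrapper:

  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test   # restart: succeeds
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test  # Code=-17: name "mdns" already running
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test    # Code=-17: _nvme-disc._tcp already browsed
  rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns                                                   # teardown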
00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.897 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:37.155 rmmod nvme_tcp 00:32:37.155 rmmod nvme_fabrics 00:32:37.155 rmmod nvme_keyring 00:32:37.155 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:37.155 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:32:37.155 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:32:37.155 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 105373 ']' 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 105373 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 105373 ']' 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 105373 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105373 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:37.156 killing process with pid 105373 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105373' 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 105373 00:32:37.156 00:51:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 105373 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:38.529 00:32:38.529 real 0m23.123s 00:32:38.529 user 0m43.543s 00:32:38.529 sys 0m2.573s 00:32:38.529 ************************************ 00:32:38.529 END TEST nvmf_mdns_discovery 00:32:38.529 ************************************ 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.529 00:51:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.529 00:51:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 00:32:38.529 00:51:43 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:32:38.529 00:51:43 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:32:38.529 00:51:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:38.529 00:51:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.529 00:51:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.529 ************************************ 00:32:38.529 START TEST nvmf_host_multipath 00:32:38.529 ************************************ 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:32:38.529 * Looking for test storage... 00:32:38.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.529 00:51:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:38.530 Cannot 
find device "nvmf_tgt_br" 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:38.530 Cannot find device "nvmf_tgt_br2" 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:38.530 Cannot find device "nvmf_tgt_br" 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:38.530 Cannot find device "nvmf_tgt_br2" 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:32:38.530 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:38.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:38.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:38.787 00:51:43 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:38.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:32:38.787 00:32:38.787 --- 10.0.0.2 ping statistics --- 00:32:38.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.787 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:38.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:38.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:32:38.787 00:32:38.787 --- 10.0.0.3 ping statistics --- 00:32:38.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.787 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:32:38.787 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:39.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:32:39.044 00:32:39.044 --- 10.0.0.1 ping statistics --- 00:32:39.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.044 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=106024 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 106024 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 106024 ']' 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:39.044 00:51:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:39.044 [2024-07-12 00:51:43.880524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:39.044 [2024-07-12 00:51:43.880762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.301 [2024-07-12 00:51:44.060586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:39.559 [2024-07-12 00:51:44.356378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
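At this point nvmftestinit has finished its fixture: the "Cannot find device" / "Cannot open network namespace" errors near the top are the expected stale-device cleanup on a fresh host (each failing command is followed by a traced "# true"), the three pings verify that both listener addresses are reachable from the initiator side and that the namespace can reach 10.0.0.1 back, nvme-tcp is loaded, and nvmf_tgt is being started inside the namespace. For reference, the topology nvmf_veth_init builds condenses to the following iproute2/iptables sequence, a sketch assembled from the commands traced above (cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk                              # private namespace for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                            # one bridge ties the host-side ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the initiator side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT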
00:32:39.559 [2024-07-12 00:51:44.356472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.559 [2024-07-12 00:51:44.356493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.559 [2024-07-12 00:51:44.356512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.559 [2024-07-12 00:51:44.356528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.559 [2024-07-12 00:51:44.357724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.559 [2024-07-12 00:51:44.357724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=106024 00:32:40.126 00:51:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:40.384 [2024-07-12 00:51:45.111789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.384 00:51:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:40.641 Malloc0 00:32:40.641 00:51:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:40.898 00:51:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:41.165 00:51:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.437 [2024-07-12 00:51:46.186262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.437 00:51:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:41.694 [2024-07-12 00:51:46.426535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:41.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
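The target-side provisioning that just ran is driven entirely through scripts/rpc.py against nvmf_tgt (pid 106024). Condensed, with the $rpc/$nqn shorthand introduced here for readability and the comments as interpretation rather than trace output:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as assembled by nvmftestinit
    $rpc bdev_malloc_create 64 512 -b Malloc0        # RAM bdev sized by MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE (64 MiB, 512 B blocks)
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2
                                                     # -a allow any host, -r report ANA, -m 2 max namespaces
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0        # expose the RAM bdev through the subsystem
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # two listeners on one address:
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421  # ports 4420 and 4421 are the two ANA paths

The -r flag is what makes the rest of the test possible: with ANA reporting enabled, each listener can be flipped between optimized, non_optimized and inaccessible, and the host's multipath policy has to follow. bdevperf then attaches the same subsystem through both ports (the second bdev_nvme_attach_controller carries -x multipath), as traced next.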
00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=106121 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 106121 /var/tmp/bdevperf.sock 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 106121 ']' 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:41.694 00:51:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:43.064 00:51:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:43.064 00:51:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:32:43.064 00:51:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:43.064 00:51:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:43.322 Nvme0n1 00:32:43.580 00:51:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:43.837 Nvme0n1 00:32:43.837 00:51:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:32:43.837 00:51:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:44.771 00:51:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:32:44.771 00:51:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:45.337 00:51:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:45.595 00:51:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:32:45.595 00:51:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106213 00:32:45.595 00:51:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:45.595 00:51:50 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:52.192 Attaching 4 probes... 00:32:52.192 @path[10.0.0.2, 4421]: 11821 00:32:52.192 @path[10.0.0.2, 4421]: 12044 00:32:52.192 @path[10.0.0.2, 4421]: 11656 00:32:52.192 @path[10.0.0.2, 4421]: 12115 00:32:52.192 @path[10.0.0.2, 4421]: 11796 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106213 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:52.192 00:51:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:52.450 00:51:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:32:52.450 00:51:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106341 00:32:52.450 00:51:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:32:52.450 00:51:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:59.005 Attaching 4 probes... 
00:32:59.005 @path[10.0.0.2, 4420]: 13340 00:32:59.005 @path[10.0.0.2, 4420]: 13504 00:32:59.005 @path[10.0.0.2, 4420]: 12805 00:32:59.005 @path[10.0.0.2, 4420]: 12710 00:32:59.005 @path[10.0.0.2, 4420]: 12452 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106341 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:59.005 00:52:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.263 00:52:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:32:59.263 00:52:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106467 00:32:59.263 00:52:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:59.263 00:52:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:05.907 Attaching 4 probes... 
00:33:05.907 @path[10.0.0.2, 4421]: 9029 00:33:05.907 @path[10.0.0.2, 4421]: 12093 00:33:05.907 @path[10.0.0.2, 4421]: 11853 00:33:05.907 @path[10.0.0.2, 4421]: 12517 00:33:05.907 @path[10.0.0.2, 4421]: 11803 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106467 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:05.907 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.166 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:33:06.166 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:06.166 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106598 00:33:06.166 00:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:12.727 00:52:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:33:12.727 00:52:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:12.727 Attaching 4 probes... 
00:33:12.727 00:33:12.727 00:33:12.727 00:33:12.727 00:33:12.727 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106598 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:12.727 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.986 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:33:12.986 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106729 00:33:12.986 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:12.986 00:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:19.568 00:52:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:19.568 00:52:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:19.568 Attaching 4 probes... 
00:33:19.568 @path[10.0.0.2, 4421]: 11537 00:33:19.568 @path[10.0.0.2, 4421]: 11530 00:33:19.568 @path[10.0.0.2, 4421]: 11674 00:33:19.568 @path[10.0.0.2, 4421]: 11536 00:33:19.568 @path[10.0.0.2, 4421]: 11305 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106729 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:19.568 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:19.568 [2024-07-12 00:52:24.278697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568 [2024-07-12 00:52:24.278956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.568
(previous tcp.c:1607 *ERROR* record repeated verbatim at every timestamp from 2024-07-12 00:52:24.278967 through 00:52:24.280145 while the listener is torn down)
00:33:19.570 [2024-07-12 00:52:24.280156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 [2024-07-12 00:52:24.280238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:33:19.570 00:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:33:20.506 00:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:33:20.506 00:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=106854 00:33:20.506 00:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:20.506 00:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:27.065 Attaching 4 probes... 
00:33:27.065 @path[10.0.0.2, 4420]: 10629 00:33:27.065 @path[10.0.0.2, 4420]: 10744 00:33:27.065 @path[10.0.0.2, 4420]: 10875 00:33:27.065 @path[10.0.0.2, 4420]: 11082 00:33:27.065 @path[10.0.0.2, 4420]: 10733 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 106854 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:27.065 [2024-07-12 00:52:31.879479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:27.065 00:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:27.322 00:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:33:33.874 00:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:33:33.874 00:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=107044 00:33:33.874 00:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:33:33.874 00:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106024 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:40.529 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:40.529 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:40.529 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:33:40.529 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:40.529 Attaching 4 probes... 
00:33:40.529 @path[10.0.0.2, 4421]: 13290 00:33:40.529 @path[10.0.0.2, 4421]: 13318 00:33:40.529 @path[10.0.0.2, 4421]: 12879 00:33:40.529 @path[10.0.0.2, 4421]: 13321 00:33:40.529 @path[10.0.0.2, 4421]: 12883 00:33:40.529 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 107044 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 106121 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 106121 ']' 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 106121 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106121 00:33:40.530 killing process with pid 106121 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106121' 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 106121 00:33:40.530 00:52:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 106121 00:33:40.530 Connection closed with partial response: 00:33:40.530 00:33:40.530 00:33:41.122 00:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 106121 00:33:41.122 00:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:41.122 [2024-07-12 00:51:46.594186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:41.122 [2024-07-12 00:51:46.594563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106121 ] 00:33:41.122 [2024-07-12 00:51:46.775859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.122 [2024-07-12 00:51:47.096342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:41.122 Running I/O for 90 seconds... 
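Everything below is the bdevperf output dumped from try.txt. Each nvme_io_qpair_print_command line is a READ or WRITE whose paired completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02), NVMe's path-related status (SCT 0x3, SC 0x02) returned while the listener's ANA state makes that path inaccessible during the switch; an ANA-aware initiator retries such I/O on another path rather than failing it. For a dump this size, a tally of affected queue/command IDs can be pulled with standard tools, e.g. (illustrative only, not part of the test):

  # Count INACCESSIBLE completions per qid/cid in the dump (saved as try.txt)
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]* cid:[0-9]*' try.txt |
      awk '{print $5, $6}' | sort | uniq -c | sort -rn | head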
00:33:41.122 [2024-07-12 00:51:57.139647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.139750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.139863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.139893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.139927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.139948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.139980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.140890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.122 [2024-07-12 00:51:57.140926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.122 [2024-07-12 00:51:57.141592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.141940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.141960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.123 [2024-07-12 00:51:57.142025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:41.123 [2024-07-12 00:51:57.142880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.142961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.142983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.143926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.143961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.123 [2024-07-12 00:51:57.144329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.123 [2024-07-12 00:51:57.144349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.144380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.124 [2024-07-12 00:51:57.144421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.144455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.124 [2024-07-12 00:51:57.144477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.144524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.124 [2024-07-12 00:51:57.144547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.144596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.124 [2024-07-12 00:51:57.144617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.144657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.124 [2024-07-12 00:51:57.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.124 
[2024-07-12 00:51:57.145629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.145966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.145995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.146999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.147029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.147057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.147089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.147109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.147158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.124 [2024-07-12 00:51:57.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.124 [2024-07-12 00:51:57.147209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.125 [2024-07-12 00:51:57.147258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.125 [2024-07-12 00:51:57.147316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.125 [2024-07-12 00:51:57.147369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.125 [2024-07-12 00:51:57.147434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 
[2024-07-12 00:51:57.147698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.147958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.147978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:51:57.148735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:51:57.148756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753201] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 
00:52:03.753873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.125 [2024-07-12 00:52:03.754026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.125 [2024-07-12 00:52:03.754047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754909] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.754949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.754972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 
00:52:03.755455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.126 [2024-07-12 00:52:03.755833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.126 [2024-07-12 00:52:03.755855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.755888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.755909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.755941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.755962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.755992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88056 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.756045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.756066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.756097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.756118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.756149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.756170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.756201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.756223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757377] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.757527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 
00:52:03.757934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.757955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.757985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.127 [2024-07-12 00:52:03.758673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.127 [2024-07-12 00:52:03.758960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.127 [2024-07-12 00:52:03.758980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 
00:52:03.759542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.759966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.759997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88360 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.760471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.760504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.761955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.761976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.762027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.762078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.128 [2024-07-12 00:52:03.762145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.128 [2024-07-12 00:52:03.762202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.128 [2024-07-12 00:52:03.762254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.128 [2024-07-12 00:52:03.762283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.128 [2024-07-12 00:52:03.762305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:33:41.129 [2024-07-12 00:52:03.762709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.762954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.762989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.763970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:41.129 [2024-07-12 00:52:03.764286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.129 [2024-07-12 00:52:03.764547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.129 [2024-07-12 00:52:03.764580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.764635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.764708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.764768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.764829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.764884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.764935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.764966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.764987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.765019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.765040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.765911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.765947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.765987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.130 [2024-07-12 00:52:03.766409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:33:41.130 [2024-07-12 00:52:03.766805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.766958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.766979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.130 [2024-07-12 00:52:03.767275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.130 [2024-07-12 00:52:03.767296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.131 [2024-07-12 00:52:03.767348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.131 [2024-07-12 00:52:03.767413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.131 [2024-07-12 00:52:03.767468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.131 [2024-07-12 00:52:03.767519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.767570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.767621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.767682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.767736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.767766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.767787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.777952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.777991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:41.131 [2024-07-12 00:52:03.778047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.131 [2024-07-12 00:52:03.778857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.131 [2024-07-12 00:52:03.778878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.778912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.778934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.132 [2024-07-12 00:52:03.780744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:33:41.132 [2024-07-12 00:52:03.780774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.780795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.780847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.780899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.780950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.780980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.781938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.781980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.132 [2024-07-12 00:52:03.782576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.132 [2024-07-12 00:52:03.782604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.782646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.782674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.782715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.782743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.782786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
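Every completion in this burst carries the status tuple (03/02): the NVMe completion status field splits into a Status Code Type and a Status Code, which SPDK prints in hex as (SCT/SC). SCT 0x3 is Path Related Status, and SC 0x2 under that type is Asymmetric Access Inaccessible, matching the text printed on each line; dnr:0 means the Do Not Retry bit is clear, m is the More bit, p the phase tag, and sqhd the submission queue head pointer echoed back by the controller. A small lookup sketch covering the path-related values (names per the NVMe base specification; the decode_status helper is illustrative):

# Decode the "(SCT/SC)" tuple printed in the completion records above.
# Only SCT 0x3 (Path Related Status) entries are filled in here.
STATUS_NAMES = {
    (0x3, 0x1): "ASYMMETRIC ACCESS PERSISTENT LOSS",
    (0x3, 0x2): "ASYMMETRIC ACCESS INACCESSIBLE",
    (0x3, 0x3): "ASYMMETRIC ACCESS TRANSITION",
}

def decode_status(token):
    """Decode e.g. '03/02' -> (sct, sc, name)."""
    sct_s, sc_s = token.split("/")
    sct, sc = int(sct_s, 16), int(sc_s, 16)
    return sct, sc, STATUS_NAMES.get((sct, sc), "unknown")

# decode_status("03/02") -> (3, 2, 'ASYMMETRIC ACCESS INACCESSIBLE')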
00:33:41.133 [2024-07-12 00:52:03.782815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.782857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.782896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.782940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.783955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.783983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.784078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.784148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.784218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.784358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.784419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.784453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.785724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.785774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.785830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.785862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.785905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.785935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.785995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:41.133 [2024-07-12 00:52:03.786281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.133 [2024-07-12 00:52:03.786479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.133 [2024-07-12 00:52:03.786911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.133 [2024-07-12 00:52:03.786955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.786985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.787948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.787977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.134 [2024-07-12 00:52:03.788046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:41.134 [2024-07-12 00:52:03.788493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.788947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.788989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.134 [2024-07-12 00:52:03.789932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.134 [2024-07-12 00:52:03.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.790001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.790029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.790093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.790122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.790164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.790192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.790233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.790261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.790304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.790333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.791687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.791754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.791813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.791845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.791887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.791934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.791979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:33:41.135 [2024-07-12 00:52:03.792051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.135 [2024-07-12 00:52:03.792726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.792786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.792840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.792891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.792942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.792972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.792992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.135 [2024-07-12 00:52:03.793528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.135 [2024-07-12 00:52:03.793558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:41.136 [2024-07-12 00:52:03.793782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.793964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.793984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 
nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.794970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.794991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.795040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.795091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.795142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.136 [2024-07-12 00:52:03.795211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.795262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.795313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:33:41.136 [2024-07-12 00:52:03.795344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.795373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.795422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.136 [2024-07-12 00:52:03.796642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.136 [2024-07-12 00:52:03.796662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.796713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.796764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.796814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.796864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.796915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.796957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.796980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:41.137 [2024-07-12 00:52:03.797814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.797965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.797995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.137 [2024-07-12 00:52:03.798015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.137 [2024-07-12 00:52:03.798779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.137 [2024-07-12 00:52:03.798799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.798828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.798857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.798888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.798910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.798940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.798960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.798989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.138 
[2024-07-12 00:52:03.799343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.799593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.799614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.800974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.800995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.138 [2024-07-12 00:52:03.801314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 
00:52:03.801851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.138 [2024-07-12 00:52:03.801901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.138 [2024-07-12 00:52:03.801931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.801951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87648 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802869] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.802968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.802997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 
00:52:03.803380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.139 [2024-07-12 00:52:03.803713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.139 [2024-07-12 00:52:03.803733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.803782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.803803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.803833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.803854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.803884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.803904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 
cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.803935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.803957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.804947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.804968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.805025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.805082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 
00:52:03.805499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.805943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.805964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88000 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.140 [2024-07-12 00:52:03.806316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.806372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.806445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.806501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.140 [2024-07-12 00:52:03.806558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.140 [2024-07-12 00:52:03.806592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.806959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.806995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 
m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.807969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.807991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.808028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.808050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:03.808263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:03.808293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.841958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.842953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.843005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.843026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.843057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.141 [2024-07-12 00:52:10.843079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:41.141 [2024-07-12 00:52:10.843153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.843609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
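The repeated "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions above are path-related status, not data errors: in SPDK's (SCT/SC) notation, Status Code Type 0x3 is Path Related Status, and Status Code 0x02 under that type is Asymmetric Access Inaccessible, i.e. the target is reporting an ANA state for the namespace. Because every completion carries dnr:0 (Do Not Retry clear), the host is permitted to retry each command, presumably on another path, which is consistent with an ANA/multipath test case driving the I/O workload through a state transition. As a minimal illustrative sketch in C (an assumption-level example, not SPDK source), the status halfword of a completion decodes into the fields printed in these log lines as follows:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit NVMe completion status halfword (CQE dword 3,
     * bits 31:16) into the fields shown in the log lines above: the
     * (SCT/SC) pair, e.g. "(03/02)", and the p/m/dnr flags. */
    static void decode_status(uint16_t status)
    {
        uint8_t p   = status & 0x1;          /* phase tag */
        uint8_t sc  = (status >> 1) & 0xff;  /* status code: 0x02 = ANA inaccessible */
        uint8_t sct = (status >> 9) & 0x7;   /* status code type: 0x3 = path related */
        uint8_t m   = (status >> 14) & 0x1;  /* more status information available */
        uint8_t dnr = (status >> 15) & 0x1;  /* do not retry */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* Example value matching the completions logged above. */
        decode_status((0x3 << 9) | (0x02 << 1)); /* prints "(03/02) p:0 m:0 dnr:0" */
        return 0;
    }
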
00:33:41.142 [2024-07-12 00:52:10.843661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.843715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.843767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.843821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.843888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.843947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.843980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.142 [2024-07-12 00:52:10.844001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.844969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.844990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.845532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:33:41.142 [2024-07-12 00:52:10.845590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.845612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:41.142 [2024-07-12 00:52:10.846376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.142 [2024-07-12 00:52:10.846430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.846801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.846913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.846948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.846969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.847943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.847972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.143 [2024-07-12 00:52:10.848247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.143 [2024-07-12 00:52:10.848586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:41.143 [2024-07-12 00:52:10.848639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.848960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.848997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.849891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.144 [2024-07-12 00:52:10.849950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.849986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 
00:52:10.850623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:10.850925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:10.850946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:41.144 [2024-07-12 00:52:24.281967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.144 [2024-07-12 00:52:24.282098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
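The (SCT/SC) pair in each completion line names the NVMe status code type and status code: 03/02, which dominates the first burst above, is the path-related status "Asymmetric Access Inaccessible" (this path's ANA state is reporting inaccessible), while 00/08, which takes over from 00:52:24 onward, is the generic status "Command Aborted due to SQ Deletion". The remaining fields are lifted straight from the completion queue entry: cdw0 is command-specific dword 0, sqhd the submission queue head pointer, p the phase tag, m the "more information in log page" bit, and dnr "do not retry" (0 throughout, so the initiator is allowed to retry elsewhere). A minimal decoding sketch of the 16-bit status half-word follows, assuming a little-endian, LSB-first bitfield layout (typical for gcc/clang on x86_64) and hypothetical types rather than SPDK's own definitions:

/*
 * Sketch only: decode the NVMe completion status fields printed in the
 * log above. Bit layout per the NVMe spec status half-word: phase tag
 * in bit 0, then SC (8 bits), SCT (3), CRD (2), M (1), DNR (1).
 * Bitfield ordering is implementation-defined in C; this assumes
 * LSB-first allocation on a little-endian target.
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {          /* hypothetical mirror of the spec layout */
    uint16_t p   : 1;         /* phase tag */
    uint16_t sc  : 8;         /* status code */
    uint16_t sct : 3;         /* status code type */
    uint16_t crd : 2;         /* command retry delay */
    uint16_t m   : 1;         /* more information in log page */
    uint16_t dnr : 1;         /* do not retry */
};

static const char *status_string(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";          /* generic, 00/08 */
    if (sct == 0x3 && sc == 0x02)
        return "ASYMMETRIC ACCESS INACCESSIBLE"; /* path-related, 03/02 */
    return "UNKNOWN";
}

int main(void)
{
    /* Raw status half-words matching the two completions in this log. */
    uint16_t raw[] = { 0x0604 /* 03/02 */, 0x0010 /* 00/08 */ };

    for (unsigned i = 0; i < sizeof(raw) / sizeof(raw[0]); i++) {
        union { uint16_t v; struct nvme_status s; } u = { .v = raw[i] };
        printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
               status_string(u.s.sct, u.s.sc),
               u.s.sct, u.s.sc, u.s.p, u.s.m, u.s.dnr);
    }
    return 0;
}

Compiled with a plain cc invocation this prints the two status strings seen above; dnr staying 0 is what leaves these reads and writes eligible for requeue once an accessible path comes back.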
00:33:41.145 [2024-07-12 00:52:24.282292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.282972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.282989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.145 [2024-07-12 00:52:24.283474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.145 [2024-07-12 00:52:24.283490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.146 [2024-07-12 00:52:24.283731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.283957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.283974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 
00:52:24.284278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.284969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.284985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.146 [2024-07-12 00:52:24.285228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.146 [2024-07-12 00:52:24.285246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20984 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.147 [2024-07-12 00:52:24.285795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.285894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.285913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.285936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.285970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21032 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21040 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21048 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21064 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21072 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21080 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21112 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21128 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21144 len:8 PRP1 0x0 PRP2 0x0 
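The tail of this burst shows the teardown path itself: after the submission queue is deleted, nvme_qpair_abort_queued_reqs reports "aborting queued i/o" and each request still sitting in the software queue is "completed manually" with a synthesized ABORTED - SQ DELETION status; the PRP1 0x0 PRP2 0x0 on these WRITEs is consistent with requests that were never mapped for the transport. A minimal sketch of that pattern, using hypothetical types rather than SPDK's internals from nvme_qpair.c:

/*
 * Sketch only: fail requests still queued in software when their
 * submission queue goes away, completing each locally with the
 * generic 00/08 (SQ deletion) status so the caller can retry.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct completion {
    uint8_t sct;   /* status code type */
    uint8_t sc;    /* status code */
    uint8_t dnr;   /* do not retry; left 0 so multipath may resubmit */
};

typedef void (*req_cb)(void *ctx, const struct completion *cpl);

struct request {
    struct request *next;
    req_cb          cb;
    void           *ctx;
    uint64_t        lba;
};

/* Drain the queue, synthesizing an aborted completion per request. */
static void abort_queued_reqs(struct request **queue)
{
    const struct completion aborted = { .sct = 0x0, .sc = 0x08, .dnr = 0 };
    struct request *req;

    while ((req = *queue) != NULL) {
        *queue = req->next;
        fprintf(stderr, "aborting queued i/o lba:%llu\n",
                (unsigned long long)req->lba);
        req->cb(req->ctx, &aborted);   /* the "completed manually" step */
    }
}

static void on_done(void *ctx, const struct completion *cpl)
{
    (void)ctx;
    printf("cpl sct/sc %02x/%02x dnr:%u\n", cpl->sct, cpl->sc, cpl->dnr);
}

int main(void)
{
    struct request r2 = { NULL, on_done, NULL, 21032 };
    struct request r1 = { &r2,  on_done, NULL, 21024 };
    struct request *queue = &r1;

    abort_queued_reqs(&queue);
    return 0;
}

Completing locally with 00/08 and dnr clear hands each write back to the caller in a retryable state, which appears to be exactly what this multipath/ANA test is exercising.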
00:33:41.147 [2024-07-12 00:52:24.286848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.147 [2024-07-12 00:52:24.286887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:8 PRP1 0x0 PRP2 0x0 00:33:41.147 [2024-07-12 00:52:24.286902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.147 [2024-07-12 00:52:24.286917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.147 [2024-07-12 00:52:24.286929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.286941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21160 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.286972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.286988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21168 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21176 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21192 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21200 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21208 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21224 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21232 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21240 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.287639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.287654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.287667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.287680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.295972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.296055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.296081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.296102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21256 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.296126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.296148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.296166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.296185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21264 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.296206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.296227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.296243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.296261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.296283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.296304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.148 [2024-07-12 00:52:24.296321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.148 [2024-07-12 00:52:24.296339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:8 PRP1 0x0 PRP2 0x0 00:33:41.148 [2024-07-12 00:52:24.296362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.148 [2024-07-12 00:52:24.296384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.149 [2024-07-12 00:52:24.296401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.149 [2024-07-12 00:52:24.296438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21288 len:8 PRP1 0x0 PRP2 0x0 00:33:41.149 [2024-07-12 00:52:24.296461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:41.149 [2024-07-12 00:52:24.296484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.149 [2024-07-12 00:52:24.296501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.149 [2024-07-12 00:52:24.296518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21296 len:8 PRP1 0x0 PRP2 0x0 00:33:41.149 [2024-07-12 00:52:24.296539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.296560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.149 [2024-07-12 00:52:24.296607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.149 [2024-07-12 00:52:24.296627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:8 PRP1 0x0 PRP2 0x0 00:33:41.149 [2024-07-12 00:52:24.296649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.296671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.149 [2024-07-12 00:52:24.296687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.149 [2024-07-12 00:52:24.296705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:8 PRP1 0x0 PRP2 0x0 00:33:41.149 [2024-07-12 00:52:24.296738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.296761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.149 [2024-07-12 00:52:24.296778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.149 [2024-07-12 00:52:24.296795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21320 len:8 PRP1 0x0 PRP2 0x0 00:33:41.149 [2024-07-12 00:52:24.296816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.297199] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
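The abort storm above is the expected signature of losing the active path while verify I/O is in flight. A minimal sketch of the trigger, reusing only the rpc.py path, NQN, address and port that appear verbatim elsewhere in this log (an illustration, not the test script's exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Dropping the listener the initiator is currently connected to completes
  # every WRITE still queued on that TCP qpair with ABORTED - SQ DELETION;
  # bdev_nvme then frees the qpair and schedules a controller reset,
  # exactly as logged above.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420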
00:33:41.149 [2024-07-12 00:52:24.297429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.149 [2024-07-12 00:52:24.297483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.297514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.149 [2024-07-12 00:52:24.297537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.297561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.149 [2024-07-12 00:52:24.297582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.297605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.149 [2024-07-12 00:52:24.297627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.149 [2024-07-12 00:52:24.297649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:33:41.149 [2024-07-12 00:52:24.300073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:41.149 [2024-07-12 00:52:24.300169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:33:41.149 [2024-07-12 00:52:24.300371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.149 [2024-07-12 00:52:24.300447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:33:41.149 [2024-07-12 00:52:24.300476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:33:41.149 [2024-07-12 00:52:24.300521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:33:41.149 [2024-07-12 00:52:24.300598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:41.149 [2024-07-12 00:52:24.300628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:41.149 [2024-07-12 00:52:24.300654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:41.149 [2024-07-12 00:52:24.300702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:41.149 [2024-07-12 00:52:24.300729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:41.149 [2024-07-12 00:52:34.433604] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
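In the sequence above, the first reconnect attempt to 10.0.0.2 port 4421 fails with errno 111 (ECONNREFUSED) because nothing is accepting there yet; bdev_nvme marks the controller failed and keeps retrying until the reset succeeds about ten seconds later at 00:52:34. While it retries, the controller and path state can be watched from outside with the standard get-controllers RPC; a hedged sketch, with the bdevperf RPC socket and controller name NVMe0 assumed from this log's own attach commands:

  # Poll bdev_nvme's view of the controller while it reconnects
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers -n NVMe0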
00:33:41.149 Received shutdown signal, test time was about 55.689282 seconds
00:33:41.149
00:33:41.149                                                            Latency(us)
00:33:41.149 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min          max
00:33:41.149 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:41.149 Verification LBA range: start 0x0 length 0x4000
00:33:41.149 Nvme0n1                     :      55.69    5127.44      20.03       0.00     0.00    24932.61    1340.51   7046430.72
00:33:41.149 ===================================================================================================================
00:33:41.149 Total                       :               5127.44      20.03       0.00     0.00    24932.61    1340.51   7046430.72
00:33:41.149 00:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.407 00:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 106024 ']' 00:52:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 106024 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 106024 ']' 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 106024 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106024 killing process with pid 106024 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106024' 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 106024 00:52:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 106024 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:47 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:42.781 00:33:42.781 real 1m4.438s 00:33:42.781 user 3m1.978s 00:33:42.781 sys 0m13.191s 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:42.781 00:52:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:42.781 ************************************ 00:33:42.781 END TEST nvmf_host_multipath 00:33:42.781 ************************************ 00:33:43.040 00:52:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:43.040 00:52:47 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:33:43.040 00:52:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:43.040 00:52:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.040 00:52:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.040 ************************************ 00:33:43.040 START TEST nvmf_timeout 00:33:43.040 ************************************ 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:33:43.040 * Looking for test storage... 
00:33:43.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:47 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:47 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:47 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:47 nvmf_tcp.nvmf_timeout -- paths/export.sh@2-@6 [... PATH export steps elided: each of the five traced steps prepends the same /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain triplet to PATH and re-exports it; the full, repetitive PATH strings add no further information ...] 00:33:43.040
00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:47
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:43.040 Cannot find device "nvmf_tgt_br" 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:33:43.040 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:43.040 Cannot find device "nvmf_tgt_br2" 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:43.041 Cannot find device "nvmf_tgt_br" 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:43.041 Cannot find device "nvmf_tgt_br2" 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:33:43.041 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:43.299 00:52:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:43.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:43.299 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:43.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:33:43.299 00:33:43.299 --- 10.0.0.2 ping statistics --- 00:33:43.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.299 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:43.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:43.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:33:43.299 00:33:43.299 --- 10.0.0.3 ping statistics --- 00:33:43.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.299 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:43.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:33:43.299 00:33:43.299 --- 10.0.0.1 ping statistics --- 00:33:43.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.299 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=107385 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 107385 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107385 ']' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.299 00:52:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:43.558 [2024-07-12 00:52:48.347938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
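For reference, the virtual topology that nvmf_veth_init assembled in the trace above (all device names, namespace name and addresses taken verbatim from this run): the initiator side nvmf_init_if (10.0.0.1/24) stays on the host, the target side nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) move into the nvmf_tgt_ns_spdk namespace, and the bridge halves of each veth pair join nvmf_br. A condensed sketch of the same commands:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # (each interface is also brought up; see the "ip link set ... up" entries above)
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target sanity check, as in the pings above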
00:33:43.558 [2024-07-12 00:52:48.348119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.816 [2024-07-12 00:52:48.528898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:44.091 [2024-07-12 00:52:48.833296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.091 [2024-07-12 00:52:48.833367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.091 [2024-07-12 00:52:48.833385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.091 [2024-07-12 00:52:48.833415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.091 [2024-07-12 00:52:48.833429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.091 [2024-07-12 00:52:48.833608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.091 [2024-07-12 00:52:48.833627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:44.666 00:52:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:44.925 [2024-07-12 00:52:49.647491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.925 00:52:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:45.183 Malloc0 00:33:45.183 00:52:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:45.442 00:52:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:45.700 00:52:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.958 [2024-07-12 00:52:50.720094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=107476 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 107476 /var/tmp/bdevperf.sock 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107476 ']' 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.958 00:52:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:45.958 [2024-07-12 00:52:50.853925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:45.958 [2024-07-12 00:52:50.854114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107476 ] 00:33:46.216 [2024-07-12 00:52:51.027561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.474 [2024-07-12 00:52:51.306076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.040 00:52:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.040 00:52:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:33:47.040 00:52:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:47.298 00:52:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:33:47.557 NVMe0n1 00:33:47.557 00:52:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=107525 00:33:47.557 00:52:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:47.557 00:52:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:33:47.557 Running I/O for 10 seconds... 
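Before that ten-second I/O run starts, the whole fixture boils down to a handful of RPCs; this condensed sketch only restates commands that appear verbatim in the trace above. Note the two knobs the timeout test actually exercises on the attach: a 5 s controller-loss timeout and a 2 s reconnect delay.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, 64 MiB / 512 B malloc namespace, subsystem + listener
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf (queue depth 128, 4096 B verify I/O) is driven
  # through its own RPC socket
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2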
00:33:48.490 00:52:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.752 [2024-07-12 00:52:53.552781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.552992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same 
with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.752 [2024-07-12 00:52:53.553269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.753 [2024-07-12 00:52:53.553280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.753 [2024-07-12 00:52:53.553291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.753 [2024-07-12 00:52:53.553303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:33:48.753 [2024-07-12 00:52:53.553314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with 
the state(5) to be set
00:33:48.753 [2024-07-12 00:52:53.553326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set
00:33:48.753 [... the preceding tcp.c:1607 error for tqpair=0x618000003480 repeats 69 more times, 00:52:53.553337 through 00:52:53.554157; the duplicate entries are omitted here ...]
00:33:48.753 [2024-07-12 00:52:53.554929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:48.753 [2024-07-12 00:52:53.554969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:48.753 [... the same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair is logged for qid:0 cid:1 through cid:3; omitted ...]
00:33:48.753 [2024-07-12 00:52:53.555077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:33:48.754 [2024-07-12 00:52:53.555148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.754 [2024-07-12 00:52:53.555190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:48.754 [... analogous READ/ABORTED - SQ DELETION pairs follow for the remaining 55 queued reads, lba 66888 through 67320, len:8 each; the individual entries are omitted here ...]
00:33:48.755 [2024-07-12 00:52:53.556984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:48.755 [2024-07-12 00:52:53.556998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:48.755 [... analogous WRITE/ABORTED - SQ DELETION pairs follow for the remaining 70 queued writes, lba 67336 through 67888; the individual entries are omitted here ...]
00:33:48.757 [2024-07-12 00:52:53.559223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:48.757 [2024-07-12 00:52:53.559239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:48.757 [2024-07-12 00:52:53.559252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0
00:33:48.757 [2024-07-12 00:52:53.559266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:48.757 [2024-07-12 00:52:53.559541] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller.
00:33:48.757 [2024-07-12 00:52:53.559829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:48.757 [2024-07-12 00:52:53.559877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:48.757 [2024-07-12 00:52:53.560014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.757 [2024-07-12 00:52:53.560049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:33:48.757 [2024-07-12 00:52:53.560066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:33:48.757 [2024-07-12 00:52:53.560094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:33:48.757 [2024-07-12 00:52:53.560120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:48.757 [2024-07-12 00:52:53.560142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:48.757 [2024-07-12 00:52:53.560157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:48.757 [2024-07-12 00:52:53.560189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
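
The get_controller and get_bdev probes that appear further down in this xtrace (host/timeout.sh@57/@58, and again at @62/@63 once the controller is gone) reduce to two queries against bdevperf's RPC socket. A minimal sketch of those helpers, assuming they are thin wrappers: only the rpc.py and jq invocations below are taken verbatim from the log, while the function bodies and variable names are reconstructions.

#!/usr/bin/env bash
# Hypothetical reconstruction of the host/timeout.sh helpers seen in this log;
# only the rpc.py/jq command lines are verbatim from the xtrace.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

get_controller() {
	# Prints the attached controller's name ("NVMe0") while it exists;
	# prints nothing once the controller has been deleted.
	"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
	# Prints the bdev name ("NVMe0n1") while it exists; empty otherwise.
	"$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
}

At @57/@58 these return NVMe0 and NVMe0n1, so the controller outlives the first reconnect failures; at @62/@63 both come back as the empty string, confirming that the controller and its bdev were torn down after the reconnect attempts kept failing.
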
00:33:48.757 [2024-07-12 00:52:53.560207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:48.757 00:52:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:33:50.688 [2024-07-12 00:52:55.572195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.688 [2024-07-12 00:52:55.572286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:33:50.688 [2024-07-12 00:52:55.572312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:33:50.688 [2024-07-12 00:52:55.572351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:33:50.688 [2024-07-12 00:52:55.572381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:50.688 [2024-07-12 00:52:55.572412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:50.688 [2024-07-12 00:52:55.572431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.688 [2024-07-12 00:52:55.572475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:50.688 [2024-07-12 00:52:55.572495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.688 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:33:50.688 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:50.689 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:33:50.946 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:33:50.946 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:33:50.946 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:33:50.946 00:52:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:33:51.203 00:52:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:33:51.203 00:52:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:33:53.117 [2024-07-12 00:52:57.572684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.117 [2024-07-12 00:52:57.572762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:33:53.117 [2024-07-12 00:52:57.572788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:33:53.117 [2024-07-12 00:52:57.572827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:33:53.117 [2024-07-12 00:52:57.572858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.117 [2024-07-12 00:52:57.572874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.117 [2024-07-12 00:52:57.572891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:53.117 [2024-07-12 00:52:57.572940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:53.117 [2024-07-12 00:52:57.572962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:55.021 [2024-07-12 00:52:59.573020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:55.021 [2024-07-12 00:52:59.573175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:55.021 [2024-07-12 00:52:59.573194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:55.021 [2024-07-12 00:52:59.573210] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:33:55.021 [2024-07-12 00:52:59.573255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:55.956
00:33:55.956                                                  Latency(us)
00:33:55.956 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:55.956 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:55.956 Verification LBA range: start 0x0 length 0x4000
00:33:55.956    NVMe0n1                  :       8.15    1025.56       4.01      15.70       0.00  122769.66    3053.38 7046430.72
00:33:55.956 ===================================================================================================================
00:33:55.956 Total                       :                1025.56       4.01      15.70       0.00  122769.66    3053.38 7046430.72
00:33:55.956 0
00:33:56.215 00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 107525
00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 107476
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107476 ']'
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107476
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107476
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
killing process with pid 107476
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107476'
00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107476
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 107476
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107476 ']'
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107476
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:56.783 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107476
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:33:57.042 killing process with pid 107476
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107476'
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107476
00:33:57.042 Received shutdown signal, test time was about 9.323708 seconds
00:33:57.042
00:33:57.042                                                                    Latency(us)
00:33:57.042 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:57.042 ===================================================================================================================
00:33:57.042 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:33:57.042 00:53:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107476
00:33:58.416 00:53:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:58.416 [2024-07-12 00:53:03.175508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=107684
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 107684 /var/tmp/bdevperf.sock
00:33:58.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107684 ']'
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:58.416 00:53:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
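waitforlisten, traced above with max_retries=100 and rpc_addr=/var/tmp/bdevperf.sock, blocks until the bdevperf process is alive and answering on its RPC socket. A rough bash equivalent of that helper (the real one lives in common/autotest_common.sh; the rpc_get_methods probe and the 0.5 s cadence are simplifying assumptions, not the exact implementation):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          # Give up early if the process died during startup.
          kill -0 "$pid" 2> /dev/null || return 1
          # The socket is usable once a basic RPC round-trips.
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                  rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }

  waitforlisten 107684 /var/tmp/bdevperf.sock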
00:33:58.416 [2024-07-12 00:53:03.310308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:33:58.416 [2024-07-12 00:53:03.310526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107684 ]
00:33:58.673 [2024-07-12 00:53:03.488515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:58.932 [2024-07-12 00:53:03.734663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:59.509 00:53:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:59.509 00:53:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:33:59.509 00:53:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:33:59.795 00:53:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:33:59.795 NVMe0n1
00:34:00.053 00:53:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=107733
00:34:00.053 00:53:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:00.053 00:53:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:34:00.987 Running I/O for 10 seconds...
00:34:01.249 00:53:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
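The attach traced above wires up the three failover knobs this test exercises; per the SPDK bdev_nvme RPC documentation, reconnect-delay-sec is the interval between reconnect attempts, fast-io-fail-timeout-sec is how long queued I/O may wait while disconnected before being failed back, and ctrlr-loss-timeout-sec is how long to keep retrying before the controller is deleted. Removing the listener right after the workload starts is what severs the connection and produces the error storm recorded below. A condensed replay of the traced RPCs, values verbatim from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # -r -1: retry count unlimited (flag as traced; exact semantics per
  # `scripts/rpc.py bdev_nvme_set_options -h`).
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

  # reconnect-delay-sec 1:      attempt a reconnect every second
  # fast-io-fail-timeout-sec 2: start failing I/O back after 2 s disconnected
  # ctrlr-loss-timeout-sec 5:   give up and delete the controller after 5 s
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Dropping the target's listener severs the TCP connection and triggers
  # the reconnect path exercised by the rest of the log.
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420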
00:34:01.249 [2024-07-12 00:53:06.019705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set
00:34:01.249 [... the same tcp.c:1607 recv-state error repeated verbatim dozens of times, timestamps 00:53:06.019770 through 00:53:06.021251, as the target tears down the qpair ...]
00:34:01.251 [2024-07-12 00:53:06.022265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.251 [2024-07-12 00:53:06.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.251 [... analogous print_command/print_completion pairs for every outstanding I/O on the severed qpair: READ lba:65768 through lba:66248 and WRITE lba:66256 through lba:66680, len:8 each, every one completed as ABORTED - SQ DELETION (00/08) ...]
00:34:01.253 [2024-07-12 00:53:06.025963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:113 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.253 [2024-07-12 00:53:06.025977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.253 [2024-07-12 00:53:06.025993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.253 [2024-07-12 00:53:06.026007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.253 [2024-07-12 00:53:06.026023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.253 [2024-07-12 00:53:06.026037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.253 [2024-07-12 00:53:06.026052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.253 [2024-07-12 00:53:06.026066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66768 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:01.254 [2024-07-12 00:53:06.026273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:34:01.254 [2024-07-12 00:53:06.026313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:01.254 [2024-07-12 00:53:06.026328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:01.254 [2024-07-12 00:53:06.026343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66776 len:8 PRP1 0x0 PRP2 0x0 00:34:01.254 [2024-07-12 00:53:06.026363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.254 [2024-07-12 00:53:06.026633] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:34:01.254 [2024-07-12 00:53:06.026937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.254 [2024-07-12 00:53:06.027054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:01.254 [2024-07-12 00:53:06.027190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.254 [2024-07-12 00:53:06.027221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:01.254 [2024-07-12 00:53:06.027238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:01.254 [2024-07-12 00:53:06.027265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:01.254 [2024-07-12 00:53:06.027292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.254 [2024-07-12 00:53:06.027307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.254 [2024-07-12 00:53:06.027322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.254 [2024-07-12 00:53:06.027361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.254 [2024-07-12 00:53:06.027381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.254 00:53:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:34:02.189 [2024-07-12 00:53:07.027573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:02.189 [2024-07-12 00:53:07.027648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:34:02.189 [2024-07-12 00:53:07.027672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:02.189 [2024-07-12 00:53:07.027711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:02.189 [2024-07-12 00:53:07.027740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:02.189 [2024-07-12 00:53:07.027757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:02.189 [2024-07-12 00:53:07.027773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:02.189 [2024-07-12 00:53:07.027815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:02.189 [2024-07-12 00:53:07.027836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:02.189 00:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:02.447 [2024-07-12 00:53:07.308872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:02.447 00:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 107733
00:34:03.381 [2024-07-12 00:53:08.043450] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:11.500
00:34:11.500                                                 Latency(us)
00:34:11.500 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:11.500 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:11.500 	 Verification LBA range: start 0x0 length 0x4000
00:34:11.500 	 NVMe0n1                             :      10.01    4953.61      19.35      0.00      0.00   25788.88    2681.02 3035150.89
00:34:11.500 ===================================================================================================================
00:34:11.500 Total                                  :               4953.61      19.35      0.00      0.00   25788.88    2681.02 3035150.89
00:34:11.500 0
00:34:11.500 00:53:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=107845
00:53:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:53:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
Running I/O for 10 seconds...
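(For reference: the listener toggle the harness performs around this point can be reproduced with SPDK's rpc.py. A minimal sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the RPC subcommands and flags are exactly those shown in this log, while the variable names and timing below are illustrative only.)

    # Hypothetical standalone reproduction of the flow this test drives.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the listener: outstanding I/O on the host is aborted with
    # "SQ DELETION" and reconnect attempts fail with errno 111.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 1

    # Restore the listener: the host's next reconnect succeeds and the
    # controller reset completes ("Resetting controller successful.").
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420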
00:34:11.500 00:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:11.500-501 [2024-07-12 00:53:16.179255-179921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set (same message repeated ~50 times)
00:34:11.501-504 [2024-07-12 00:53:16.180794-184635] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: interleaved READ (sqid:1 nsid:1 lba:66184-66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1 nsid:1 lba:66744-67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (one command/completion pair per outstanding I/O)
00:34:11.504 [2024-07-12 00:53:16.184648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.504 [2024-07-12 00:53:16.184897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.184938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:11.504 [2024-07-12 00:53:16.184959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67192 len:8 PRP1 0x0 PRP2 0x0 00:34:11.504 [2024-07-12 00:53:16.184974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.504 [2024-07-12 00:53:16.185002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:34:11.504 [2024-07-12 00:53:16.185015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:11.504 [2024-07-12 00:53:16.185033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67200 len:8 PRP1 0x0 PRP2 0x0
00:34:11.504 [2024-07-12 00:53:16.185047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:11.504 [2024-07-12 00:53:16.185303] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller.
00:34:11.504 [2024-07-12 00:53:16.185455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:11.504 [2024-07-12 00:53:16.185489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:11.504 [2024-07-12 00:53:16.185507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:11.504 [2024-07-12 00:53:16.185521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:11.504 [2024-07-12 00:53:16.185536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:11.504 [2024-07-12 00:53:16.185550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:11.504 [2024-07-12 00:53:16.185564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:11.504 [2024-07-12 00:53:16.185578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:11.504 [2024-07-12 00:53:16.185591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:11.504 [2024-07-12 00:53:16.185832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.504 [2024-07-12 00:53:16.185887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:11.504 [2024-07-12 00:53:16.186030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.504 [2024-07-12 00:53:16.186072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:34:11.504 [2024-07-12 00:53:16.186089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:11.504 [2024-07-12 00:53:16.186118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:11.504 [2024-07-12 00:53:16.186142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.504 [2024-07-12 00:53:16.186156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.504 [2024-07-12 00:53:16.186171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.504 [2024-07-12 00:53:16.186202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.504 [2024-07-12 00:53:16.201174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.504 00:53:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:34:12.439 [2024-07-12 00:53:17.201455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:12.439 [2024-07-12 00:53:17.201844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:34:12.439 [2024-07-12 00:53:17.202009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:12.439 [2024-07-12 00:53:17.202221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:12.439 [2024-07-12 00:53:17.202458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:12.439 [2024-07-12 00:53:17.202707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:12.439 [2024-07-12 00:53:17.202847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:12.439 [2024-07-12 00:53:17.202936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:12.439 [2024-07-12 00:53:17.203081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:13.376 [2024-07-12 00:53:18.203463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:13.376 [2024-07-12 00:53:18.203784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:34:13.376 [2024-07-12 00:53:18.203818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:13.376 [2024-07-12 00:53:18.203868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:13.376 [2024-07-12 00:53:18.203898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:13.376 [2024-07-12 00:53:18.203916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:13.376 [2024-07-12 00:53:18.203935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:13.376 [2024-07-12 00:53:18.203980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:13.376 [2024-07-12 00:53:18.204001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:14.309 [2024-07-12 00:53:19.204739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:14.310 [2024-07-12 00:53:19.204849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:34:14.310 [2024-07-12 00:53:19.204874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:34:14.310 [2024-07-12 00:53:19.205165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:34:14.310 [2024-07-12 00:53:19.205461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:14.310 [2024-07-12 00:53:19.205494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:14.310 [2024-07-12 00:53:19.205514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:14.310 00:53:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:14.310 [2024-07-12 00:53:19.209631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:14.310 [2024-07-12 00:53:19.209669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:14.568 [2024-07-12 00:53:19.481851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:14.826 00:53:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 107845
00:34:15.401 [2024-07-12 00:53:20.244350] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
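The exchange above is the heart of the nvmf_timeout host test: with the target's listener gone, every reconnect attempt dies in connect() with errno 111 (ECONNREFUSED) and the controller is repeatedly marked failed; as soon as nvmf_subsystem_add_listener restores the listener, the very next reset succeeds. A minimal sketch of that listener-bounce step, assuming an SPDK checkout at the path used throughout this log and a target already serving the subsystem (both rpc.py invocations appear verbatim in the trace; the pause mirrors the sleep 3 at host/timeout.sh@101):

SPDK=/home/vagrant/spdk_repo/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the listener: queued I/O completes as ABORTED - SQ DELETION and the
# host enters its reconnect loop (connect() -> errno 111, ECONNREFUSED).
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

sleep 3

# Restore the listener: the next reconnect attempt lands and bdev_nvme
# logs "Resetting controller successful."
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420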
00:34:20.687
00:34:20.687                                 Latency(us)
00:34:20.687 Device Information          : runtime(s)     IOPS    MiB/s    Fail/s    TO/s    Average      min         max
00:34:20.687 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:20.687 Verification LBA range: start 0x0 length 0x4000
00:34:20.687 NVMe0n1                     :      10.02   4168.84    16.28   3429.97    0.00   16808.02  1064.96  3019898.88
00:34:20.687 ===================================================================================================================
00:34:20.687 Total                       :              4168.84    16.28   3429.97    0.00   16808.02     0.00  3019898.88
00:34:20.687 0
00:34:20.687 00:53:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 107684
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107684 ']'
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107684
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107684
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107684'
killing process with pid 107684
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107684
00:34:20.688 Received shutdown signal, test time was about 10.000000 seconds
00:34:20.688
00:34:20.688                                 Latency(us)
00:34:20.688 Device Information          : runtime(s)     IOPS    MiB/s    Fail/s    TO/s    Average      min         max
00:34:20.688 ===================================================================================================================
00:34:20.688 Total                       :                 0.00     0.00      0.00    0.00       0.00     0.00        0.00
00:53:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107684
00:34:21.623 00:53:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=107973
00:53:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:53:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 107973 /var/tmp/bdevperf.sock
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 107973 ']'
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:53:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:34:21.623 [2024-07-12 00:53:26.365139] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
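A quick consistency check on the verification summary above: the run used queue depth 128, and with both successful and failed completions draining queue slots, Little's law predicts an average latency of roughly

W \approx \frac{Q}{\mathrm{IOPS} + \mathrm{Fail/s}} = \frac{128}{4168.84 + 3429.97} \approx 0.01684\,\mathrm{s} \approx 16845\,\mu\mathrm{s}

within a quarter of a percent of the reported 16808.02 us average. This is a back-of-the-envelope check, not how bdevperf derives the column.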
00:34:21.623 [2024-07-12 00:53:26.365415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107973 ]
00:34:21.623 [2024-07-12 00:53:26.551905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:21.881 [2024-07-12 00:53:26.793465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:22.448 00:53:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:53:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:53:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=108000
00:53:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 107973 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:53:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:34:23.015 00:53:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:34:23.275 NVMe0n1
00:34:23.275 00:53:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=108055
00:53:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:53:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:34:23.275 Running I/O for 10 seconds...
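Everything the harness does with this second bdevperf instance goes over its UNIX-domain RPC socket, as the trace above shows. A condensed replay of those steps (paths and arguments copied from the trace; the comments are interpretation, not bdevperf's own documentation):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# -z starts bdevperf idle, waiting for a perform_tests RPC; -q 128, -o 4096,
# -w randread and -t 10 give the queue depth, I/O size, workload and run
# time reflected in the results table.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &

"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

# Attach the target as bdev NVMe0n1: retry the connection every 2 s and
# declare the controller lost after 5 s without one.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the timed run; bdevperf answers with "Running I/O for 10 seconds...".
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests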
00:34:24.209 00:53:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.469 [2024-07-12 00:53:29.343467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.469 [2024-07-12 00:53:29.343624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.469 [2024-07-12 00:53:29.343642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.469 [2024-07-12 00:53:29.343654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same 
with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.343953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with 
the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.344315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:34:24.470 [2024-07-12 00:53:29.345547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.345969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.345986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.470 [2024-07-12 00:53:29.346480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.470 [2024-07-12 00:53:29.346514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.346984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:24.471 [2024-07-12 00:53:29.347200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 
00:53:29.347610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.347961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.347975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.348008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.348024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.348044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.348060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.348078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.471 [2024-07-12 00:53:29.348094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.471 [2024-07-12 00:53:29.348126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31344 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.348949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.348965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
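A note on reading these records: each *NOTICE* pair above is a queued READ command (sqid/cid identify the queue slot, lba/len the request) followed by its forced completion, and the "(00/08)" in the completion is SCT/SC — status code type 0x0 (generic) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion". A tiny bash sketch of that decode (illustrative only, not part of the test):

# "(00/08)" splits into status code type / status code; 00/08 is the
# generic "Command Aborted due to SQ Deletion" status that every queued
# command receives when its submission queue is torn down.
IFS=/ read -r sct sc <<< "00/08"
printf 'sct=0x%s (generic), sc=0x%s (aborted: SQ deletion)\n' "$sct" "$sc"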
00:34:24.472 [2024-07-12 00:53:29.349306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.472 [2024-07-12 00:53:29.349951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.472 [2024-07-12 00:53:29.349969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.350966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.473 [2024-07-12 00:53:29.350984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:34:24.473 [2024-07-12 00:53:29.351041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:24.473 [2024-07-12 
00:53:29.351056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:24.473 [2024-07-12 00:53:29.351081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119264 len:8 PRP1 0x0 PRP2 0x0 00:34:24.473 [2024-07-12 00:53:29.351097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351459] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:34:24.473 [2024-07-12 00:53:29.351696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.473 [2024-07-12 00:53:29.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.473 [2024-07-12 00:53:29.351770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.473 [2024-07-12 00:53:29.351802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.473 [2024-07-12 00:53:29.351848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.473 [2024-07-12 00:53:29.351879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:24.473 [2024-07-12 00:53:29.352206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.473 [2024-07-12 00:53:29.352272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:24.473 [2024-07-12 00:53:29.352524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.473 [2024-07-12 00:53:29.352556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:24.473 [2024-07-12 00:53:29.352574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:24.473 [2024-07-12 00:53:29.352668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:24.473 [2024-07-12 00:53:29.352707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.473 [2024-07-12 00:53:29.352725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.473 [2024-07-12 00:53:29.352747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
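The connect() failure above reports errno = 111, which on Linux is ECONNREFUSED: while the target side is down for the reset, nothing is accepting TCP connections at 10.0.0.2:4420, so the reconnect attempt is refused immediately rather than timing out. A quick bash-only probe of the same condition (address and port taken from the log; the probe is illustrative and not part of the test):

# errno 111 (ECONNREFUSED) means no listener at the far end; bash's
# /dev/tcp redirection surfaces the same refusal the initiator sees.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect to 10.0.0.2:4420 refused, matching errno = 111 above"
fi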
00:34:24.473 [2024-07-12 00:53:29.352785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.473 [2024-07-12 00:53:29.352805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.473 00:53:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 108055 00:34:27.009 [2024-07-12 00:53:31.353078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.009 [2024-07-12 00:53:31.353194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:27.009 [2024-07-12 00:53:31.353227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:27.009 [2024-07-12 00:53:31.353267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:27.009 [2024-07-12 00:53:31.353298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.009 [2024-07-12 00:53:31.353315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.009 [2024-07-12 00:53:31.353344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.009 [2024-07-12 00:53:31.353420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.009 [2024-07-12 00:53:31.353467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.427 [2024-07-12 00:53:33.353777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.427 [2024-07-12 00:53:33.353885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:34:28.427 [2024-07-12 00:53:33.353920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:34:28.427 [2024-07-12 00:53:33.353960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:34:28.427 [2024-07-12 00:53:33.354007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.427 [2024-07-12 00:53:33.354026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.427 [2024-07-12 00:53:33.354044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.427 [2024-07-12 00:53:33.354098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.427 [2024-07-12 00:53:33.354129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.958 [2024-07-12 00:53:35.354225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
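The three retries above land at 00:53:31, 00:53:33 and 00:53:35 — two seconds apart each time — which is the reconnect-delay behavior this timeout test exercises. As a hedged sketch of where that cadence comes from, recent SPDK rpc.py trees let the attach call carry explicit reconnect knobs; the option names below are assumed from bdev_nvme_attach_controller and the values are illustrative, not read from this run:

# Sketch: attach the controller with a 2 s delay between reconnect
# attempts and an overall controller-loss budget, after which
# bdev_nvme stops retrying. (Assumed options; verify with
# rpc.py bdev_nvme_attach_controller --help on your tree.)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8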
00:34:30.958 [2024-07-12 00:53:35.354365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:30.958 [2024-07-12 00:53:35.354402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:30.958 [2024-07-12 00:53:35.354420] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:34:30.958 [2024-07-12 00:53:35.354480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:31.525
00:34:31.525 Latency(us)
00:34:31.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:31.525 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:34:31.525 NVMe0n1 : 8.20 1901.69 7.43 15.62 0.00 66761.82 5183.30 7046430.72
00:34:31.525 ===================================================================================================================
00:34:31.525 Total : 1901.69 7.43 15.62 0.00 66761.82 5183.30 7046430.72
00:34:31.525 0
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
Attaching 5 probes...
00:34:31.525 1378.268233: reset bdev controller NVMe0
00:34:31.525 1378.444372: reconnect bdev controller NVMe0
00:34:31.525 3378.923352: reconnect delay bdev controller NVMe0
00:34:31.525 3378.951270: reconnect bdev controller NVMe0
00:34:31.525 5379.609262: reconnect delay bdev controller NVMe0
00:34:31.525 5379.649361: reconnect bdev controller NVMe0
00:34:31.525 7380.251161: reconnect delay bdev controller NVMe0
00:34:31.525 7380.273992: reconnect bdev controller NVMe0
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 108000
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 107973
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107973 ']'
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107973
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:31.525 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107973
killing process with pid 107973
Received shutdown signal, test time was about 8.259578 seconds
00:34:31.526
00:34:31.526 Latency(us)
00:34:31.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:31.526 ===================================================================================================================
00:34:31.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:31.526 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107973'
00:53:36
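The pass check in the trace above is compact: host/timeout.sh counts the 'reconnect delay' records emitted by the probes and fails the test when two or fewer were seen; with three recorded here, (( 3 <= 2 )) evaluates false and the script moves on to cleanup. A condensed sketch of that guard (marker string and trace path from the log; the error handling is paraphrased):

# Condensed form of the reconnect-delay guard traced above.
count=$(grep -c 'reconnect delay bdev controller NVMe0' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
if (( count <= 2 )); then
    echo "expected more than 2 delayed reconnects, saw $count" >&2
    exit 1
fi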
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107973 00:34:31.526 00:53:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107973 00:34:32.902 00:53:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.160 00:53:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:34:33.160 00:53:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:34:33.160 00:53:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.160 00:53:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.160 rmmod nvme_tcp 00:34:33.160 rmmod nvme_fabrics 00:34:33.160 rmmod nvme_keyring 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 107385 ']' 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 107385 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 107385 ']' 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 107385 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:33.160 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107385 00:34:33.419 killing process with pid 107385 00:34:33.419 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:33.419 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:33.419 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107385' 00:34:33.419 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 107385 00:34:33.419 00:53:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 107385 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.793 00:53:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:35.051 00:34:35.051 real 
0m52.007s 00:34:35.051 user 2m30.895s 00:34:35.051 sys 0m5.412s 00:34:35.051 00:53:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:35.051 00:53:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:35.051 ************************************ 00:34:35.051 END TEST nvmf_timeout 00:34:35.051 ************************************ 00:34:35.051 00:53:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:35.051 00:53:39 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:34:35.051 00:53:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:35.052 00:53:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:35.052 00:53:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 00:53:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:35.052 00:34:35.052 real 25m45.198s 00:34:35.052 user 75m48.909s 00:34:35.052 sys 4m47.261s 00:34:35.052 00:53:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:35.052 ************************************ 00:34:35.052 00:53:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 END TEST nvmf_tcp 00:34:35.052 ************************************ 00:34:35.052 00:53:39 -- common/autotest_common.sh@1142 -- # return 0 00:34:35.052 00:53:39 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:35.052 00:53:39 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.052 00:53:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:35.052 00:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.052 00:53:39 -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 ************************************ 00:34:35.052 START TEST spdkcli_nvmf_tcp 00:34:35.052 ************************************ 00:34:35.052 00:53:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.310 * Looking for test storage... 
00:34:35.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:34:35.310 00:53:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:34:35.310 00:53:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:34:35.310 00:53:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:34:35.310 00:53:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:35.310 00:53:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.310 00:53:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108286 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108286 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 108286 ']' 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:35.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
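waitforlisten blocks here until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket, so no spdkcli command can race the target's startup. A rough equivalent of that launch-and-poll pattern (binary, flags and socket path from the trace; the loop is a sketch, not the actual waitforlisten implementation):

# Start the target, then poll its RPC socket until it responds.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
nvmf_tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmf_tgt_pid" 2>/dev/null || exit 1   # target died early
    sleep 0.5
done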
00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:35.311 00:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.311 [2024-07-12 00:53:40.151625] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:35.311 [2024-07-12 00:53:40.151829] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108286 ] 00:34:35.569 [2024-07-12 00:53:40.333466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.828 [2024-07-12 00:53:40.631163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.828 [2024-07-12 00:53:40.631177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.395 00:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:36.395 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:36.395 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:36.395 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:36.395 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:36.395 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:36.395 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:36.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:36.395 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:36.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:36.395 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:36.395 ' 00:34:39.676 [2024-07-12 00:53:43.949844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.607 [2024-07-12 00:53:45.233552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:43.137 [2024-07-12 00:53:47.579819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:45.037 [2024-07-12 00:53:49.613992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:46.411 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:46.411 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:46.411 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:46.411 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.411 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.411 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:46.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:46.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:46.412 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:46.412 00:53:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.978 00:53:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:46.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:46.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:46.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:46.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:46.978 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:46.978 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:46.978 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:46.978 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:46.978 ' 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:53.534 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:53.534 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:53.534 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:34:53.534 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108286 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 108286 ']' 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 108286 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108286 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:53.534 killing process with pid 108286 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108286' 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 108286 00:34:53.534 00:53:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 108286 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108286 ']' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108286 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 108286 ']' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 108286 00:34:54.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (108286) - No such process 00:34:54.466 Process with pid 108286 is not found 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 108286 is not found' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:54.466 00:34:54.466 real 0m19.394s 00:34:54.466 user 0m40.234s 00:34:54.466 sys 0m1.576s 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:54.466 00:53:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.466 ************************************ 00:34:54.466 END TEST spdkcli_nvmf_tcp 00:34:54.466 ************************************ 00:34:54.466 00:53:59 -- common/autotest_common.sh@1142 -- # return 0 00:34:54.466 00:53:59 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.466 00:53:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:54.466 00:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:54.466 00:53:59 -- common/autotest_common.sh@10 -- # set +x 00:34:54.466 
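Before the teardown that just completed, check_match was the step that validated the configuration: it dumps the live spdkcli tree and compares it against a stored template, deleting the capture only when the comparison passes. A condensed sketch of that flow (tools and paths from the trace; the redirection into the .test file is inferred from the rm that follows it):

# Capture the spdkcli tree, diff it against the stored template, and
# drop the capture on success; a non-zero exit from match fails the test.
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf \
    > /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
/home/vagrant/spdk_repo/spdk/test/app/match/match \
    /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test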
************************************ 00:34:54.466 START TEST nvmf_identify_passthru 00:34:54.466 ************************************ 00:34:54.466 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.724 * Looking for test storage... 00:34:54.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:54.724 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:54.724 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.724 00:53:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.724 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.724 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.724 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:54.724 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:54.725 Cannot find device "nvmf_tgt_br" 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:54.725 Cannot find device "nvmf_tgt_br2" 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:54.725 Cannot find device "nvmf_tgt_br" 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:54.725 Cannot find device "nvmf_tgt_br2" 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:54.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:54.725 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:54.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:34:54.983 00:34:54.983 --- 10.0.0.2 ping statistics --- 00:34:54.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.983 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:54.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:54.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:34:54.983 00:34:54.983 --- 10.0.0.3 ping statistics --- 00:34:54.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.983 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:54.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:34:54.983 00:34:54.983 --- 10.0.0.1 ping statistics --- 00:34:54.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.983 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:54.983 00:53:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:54.983 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:54.983 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:34:54.983 00:53:59 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:34:54.983 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:34:54.983 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:34:55.241 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:55.241 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:55.241 00:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:55.499 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
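Every TCP transfer in this test rides a synthetic network that nvmf_veth_init (nvmf/common.sh @141..@209, traced above) rebuilds from scratch: the initiator end nvmf_init_if (10.0.0.1) stays in the root namespace, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the *_br peer ends meet on the nvmf_br bridge. The sketch below condenses the traced commands verbatim; root is required, and the idempotent teardown the helper runs first (the "Cannot find device" lines above) is skipped:

#!/usr/bin/env bash
# Condensed replay of nvmf_veth_init as traced above. Run as root.
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends join the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace, then address everything
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the *_br ends so 10.0.0.1 can reach 10.0.0.2/10.0.0.3
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# open the NVMe/TCP port and allow hairpin forwarding on the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# same sanity pings as the trace (@205..@207)
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1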
00:34:55.499 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:55.499 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:55.499 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108787 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.757 00:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108787 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 108787 ']' 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:55.757 00:54:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.015 [2024-07-12 00:54:00.713683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:56.015 [2024-07-12 00:54:00.713867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.015 [2024-07-12 00:54:00.907063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.580 [2024-07-12 00:54:01.231897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.580 [2024-07-12 00:54:01.231974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.580 [2024-07-12 00:54:01.231997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.580 [2024-07-12 00:54:01.232014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:56.581 [2024-07-12 00:54:01.232029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.581 [2024-07-12 00:54:01.232302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.581 [2024-07-12 00:54:01.232927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.581 [2024-07-12 00:54:01.233057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.581 [2024-07-12 00:54:01.233060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:56.838 00:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.838 00:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.838 00:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.402 [2024-07-12 00:54:02.110822] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:57.402 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.402 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.402 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.402 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 [2024-07-12 00:54:02.129327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 Nvme0n1 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 [2024-07-12 00:54:02.284240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.403 [ 00:34:57.403 { 00:34:57.403 "allow_any_host": true, 00:34:57.403 "hosts": [], 00:34:57.403 "listen_addresses": [], 00:34:57.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:57.403 "subtype": "Discovery" 00:34:57.403 }, 00:34:57.403 { 00:34:57.403 "allow_any_host": true, 00:34:57.403 "hosts": [], 00:34:57.403 "listen_addresses": [ 00:34:57.403 { 00:34:57.403 "adrfam": "IPv4", 00:34:57.403 "traddr": "10.0.0.2", 00:34:57.403 "trsvcid": "4420", 00:34:57.403 "trtype": "TCP" 00:34:57.403 } 00:34:57.403 ], 00:34:57.403 "max_cntlid": 65519, 00:34:57.403 "max_namespaces": 1, 00:34:57.403 "min_cntlid": 1, 00:34:57.403 "model_number": "SPDK bdev Controller", 00:34:57.403 "namespaces": [ 00:34:57.403 { 00:34:57.403 "bdev_name": "Nvme0n1", 00:34:57.403 "name": "Nvme0n1", 00:34:57.403 "nguid": "E83AFFAD73774932AB5C68A2E8582DE4", 00:34:57.403 "nsid": 1, 00:34:57.403 "uuid": "e83affad-7377-4932-ab5c-68a2e8582de4" 00:34:57.403 } 00:34:57.403 ], 00:34:57.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.403 "serial_number": "SPDK00000000000001", 00:34:57.403 "subtype": "NVMe" 00:34:57.403 } 00:34:57.403 ] 00:34:57.403 00:54:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:57.403 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:57.970 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:34:57.970 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:57.970 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:57.970 00:54:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:58.226 00:54:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:34:58.226 00:54:03 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:34:58.226 00:54:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:34:58.226 00:54:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.226 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.226 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:58.226 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.226 00:54:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:58.226 00:54:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:58.226 rmmod nvme_tcp 00:34:58.226 rmmod nvme_fabrics 00:34:58.226 rmmod nvme_keyring 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:58.226 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 108787 ']' 00:34:58.227 00:54:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 108787 00:34:58.227 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 108787 ']' 00:34:58.227 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 108787 00:34:58.227 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:58.227 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:58.227 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108787 00:34:58.483 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:58.483 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:58.483 killing process with pid 108787 00:34:58.483 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108787' 00:34:58.483 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 108787 00:34:58.483 00:54:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 108787 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.859 
00:54:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.859 00:54:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.859 00:54:04 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:59.859 00:34:59.859 real 0m5.167s 00:34:59.859 user 0m12.394s 00:34:59.859 sys 0m1.406s 00:34:59.859 00:54:04 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.859 ************************************ 00:34:59.859 END TEST nvmf_identify_passthru 00:34:59.859 00:54:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.859 ************************************ 00:34:59.859 00:54:04 -- common/autotest_common.sh@1142 -- # return 0 00:34:59.859 00:54:04 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:59.859 00:54:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:59.859 00:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.859 00:54:04 -- common/autotest_common.sh@10 -- # set +x 00:34:59.859 ************************************ 00:34:59.860 START TEST nvmf_dif 00:34:59.860 ************************************ 00:34:59.860 00:54:04 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:59.860 * Looking for test storage... 00:34:59.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:59.860 00:54:04 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.860 00:54:04 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.860 00:54:04 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.860 00:54:04 
nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.860 00:54:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.860 00:54:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.860 00:54:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:59.860 00:54:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:59.860 00:54:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.860 00:54:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.860 00:54:04 nvmf_dif -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:59.860 Cannot find device "nvmf_tgt_br" 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@155 -- # true 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:59.860 Cannot find device "nvmf_tgt_br2" 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@156 -- # true 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:59.860 Cannot find device "nvmf_tgt_br" 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@158 -- # true 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:59.860 Cannot find device "nvmf_tgt_br2" 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@159 -- # true 00:34:59.860 00:54:04 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:00.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@162 -- # true 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:00.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@163 -- # true 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:00.119 00:54:04 nvmf_dif -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:00.119 00:54:04 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:00.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:00.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:35:00.119 00:35:00.119 --- 10.0.0.2 ping statistics --- 00:35:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.119 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:00.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:00.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:35:00.119 00:35:00.119 --- 10.0.0.3 ping statistics --- 00:35:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.119 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:00.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:00.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:35:00.119 00:35:00.119 --- 10.0.0.1 ping statistics --- 00:35:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.119 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:00.119 00:54:05 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:00.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:00.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:00.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:00.683 00:54:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:00.683 00:54:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=109171 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 109171 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 109171 ']' 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.683 00:54:05 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:00.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:00.683 00:54:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:00.683 [2024-07-12 00:54:05.597777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:00.683 [2024-07-12 00:54:05.597979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.941 [2024-07-12 00:54:05.778576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.199 [2024-07-12 00:54:06.078591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:01.199 [2024-07-12 00:54:06.078686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.199 [2024-07-12 00:54:06.078712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.199 [2024-07-12 00:54:06.078743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.199 [2024-07-12 00:54:06.078764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.199 [2024-07-12 00:54:06.078815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:01.766 00:54:06 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 00:54:06 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.766 00:54:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:01.766 00:54:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 [2024-07-12 00:54:06.604385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.766 00:54:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 ************************************ 00:35:01.766 START TEST fio_dif_1_default 00:35:01.766 ************************************ 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 bdev_null0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.766 00:54:06 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:01.766 [2024-07-12 00:54:06.652599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:01.766 { 00:35:01.766 "params": { 00:35:01.766 "name": "Nvme$subsystem", 00:35:01.766 "trtype": "$TEST_TRANSPORT", 00:35:01.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.766 "adrfam": "ipv4", 00:35:01.766 "trsvcid": "$NVMF_PORT", 00:35:01.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.766 "hdgst": ${hdgst:-false}, 00:35:01.766 "ddgst": ${ddgst:-false} 00:35:01.766 }, 00:35:01.766 "method": "bdev_nvme_attach_controller" 00:35:01.766 } 00:35:01.766 EOF 00:35:01.766 )") 
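Note what fio is actually handed here: fio_bdev preloads the spdk_bdev engine next to libasan (the LD_PRELOAD line below) and reads both inputs from anonymous descriptors, the SPDK bdev JSON on /dev/fd/62 and the fio job on /dev/fd/61, so nothing touches disk. The attach-controller object itself is printed verbatim by nvmf/common.sh@558 in the trace. The standalone sketch that follows reproduces that plumbing with process substitution; two pieces are assumptions rather than trace content: the outer "subsystems"/"bdev" framing around the printed object (SPDK's usual JSON-config layout), and the job file, which is reconstructed from the fio banner further down (job filename0, randread, bs=4096, iodepth=4) rather than copied from gen_fio_conf:

#!/usr/bin/env bash
# Sketch of the fio_bdev call traced here, with the two /dev/fd inputs
# made explicit via process substitution. Paths are the ones in this log.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Assumed outer framing; the inner object matches the printf in the trace.
json=$(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
)

# Hypothetical job file, reconstructed from the fio banner in this log.
job=$(cat <<'JOB'
[filename0]
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
thread=1
JOB
)

LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev \
  --spdk_json_conf <(printf '%s\n' "$json") \
  <(printf '%s\n' "$job")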
00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:01.766 "params": { 00:35:01.766 "name": "Nvme0", 00:35:01.766 "trtype": "tcp", 00:35:01.766 "traddr": "10.0.0.2", 00:35:01.766 "adrfam": "ipv4", 00:35:01.766 "trsvcid": "4420", 00:35:01.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.766 "hdgst": false, 00:35:01.766 "ddgst": false 00:35:01.766 }, 00:35:01.766 "method": "bdev_nvme_attach_controller" 00:35:01.766 }' 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:01.766 00:54:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:02.025 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:02.025 fio-3.35 00:35:02.025 Starting 1 thread 00:35:14.224 00:35:14.224 filename0: (groupid=0, jobs=1): err= 0: pid=109253: Fri Jul 12 00:54:17 2024 00:35:14.224 read: IOPS=182, BW=729KiB/s (747kB/s)(7312KiB/10029msec) 00:35:14.224 slat (usec): min=7, max=117, avg=14.75, stdev=11.08 00:35:14.224 clat (usec): min=514, max=41931, avg=21896.68, stdev=20162.38 00:35:14.224 lat (usec): min=522, max=42002, avg=21911.44, stdev=20162.43 00:35:14.224 clat percentiles (usec): 00:35:14.224 | 1.00th=[ 570], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 685], 00:35:14.224 | 30.00th=[ 709], 40.00th=[ 758], 50.00th=[40633], 60.00th=[41157], 00:35:14.224 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:14.224 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:14.224 | 99.99th=[41681] 00:35:14.224 bw ( KiB/s): min= 384, max= 1216, per=99.99%, avg=729.60, stdev=247.13, samples=20 00:35:14.224 iops : min= 96, max= 304, avg=182.40, stdev=61.78, samples=20 00:35:14.224 lat (usec) : 750=38.84%, 1000=8.42% 00:35:14.224 lat (msec) : 2=0.22%, 50=52.52% 00:35:14.224 cpu : usr=92.95%, sys=6.28%, ctx=182, 
majf=0, minf=1637 00:35:14.224 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.224 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.224 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:14.224 00:35:14.224 Run status group 0 (all jobs): 00:35:14.224 READ: bw=729KiB/s (747kB/s), 729KiB/s-729KiB/s (747kB/s-747kB/s), io=7312KiB (7487kB), run=10029-10029msec 00:35:14.224 ----------------------------------------------------- 00:35:14.224 Suppressions used: 00:35:14.224 count bytes template 00:35:14.224 1 8 /usr/src/fio/parse.c 00:35:14.224 1 8 libtcmalloc_minimal.so 00:35:14.224 1 904 libcrypto.so 00:35:14.224 ----------------------------------------------------- 00:35:14.224 00:35:14.224 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:14.224 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:14.224 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.224 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.225 00:35:14.225 real 0m12.503s 00:35:14.225 ************************************ 00:35:14.225 END TEST fio_dif_1_default 00:35:14.225 ************************************ 00:35:14.225 user 0m11.282s 00:35:14.225 sys 0m1.059s 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.225 00:54:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:14.506 00:54:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:14.506 00:54:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:14.506 00:54:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 ************************************ 00:35:14.506 START TEST fio_dif_1_multi_subsystems 00:35:14.506 ************************************ 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 
0 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 bdev_null0 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 [2024-07-12 00:54:19.206993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 bdev_null1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:14.506 { 00:35:14.506 "params": { 00:35:14.506 "name": "Nvme$subsystem", 00:35:14.506 "trtype": "$TEST_TRANSPORT", 00:35:14.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.506 "adrfam": "ipv4", 00:35:14.506 "trsvcid": "$NVMF_PORT", 00:35:14.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.506 "hdgst": ${hdgst:-false}, 00:35:14.506 "ddgst": ${ddgst:-false} 00:35:14.506 }, 00:35:14.506 "method": "bdev_nvme_attach_controller" 00:35:14.506 } 00:35:14.506 EOF 00:35:14.506 )") 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:14.506 00:54:19 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:14.506 { 00:35:14.506 "params": { 00:35:14.506 "name": "Nvme$subsystem", 00:35:14.506 "trtype": "$TEST_TRANSPORT", 00:35:14.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.506 "adrfam": "ipv4", 00:35:14.506 "trsvcid": "$NVMF_PORT", 00:35:14.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.506 "hdgst": ${hdgst:-false}, 00:35:14.506 "ddgst": ${ddgst:-false} 00:35:14.506 }, 00:35:14.506 "method": "bdev_nvme_attach_controller" 00:35:14.506 } 00:35:14.506 EOF 00:35:14.506 )") 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
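Alongside assembling the two-controller JSON (validated by the jq step above and printed below), the harness resolves which ASan runtime the spdk_bdev fio plugin links against and preloads it ahead of the plugin, so that fio, which is not built with ASan, can still load the instrumented plugin. A minimal sketch of that preload dance under the paths and field positions this trace reports (ldd, grep libasan, awk '{print $3}' are taken straight from the lines above; gen_nvmf_target_json and gen_fio_conf are the harness helpers being traced here):

    # Find the ASan runtime the plugin was linked against...
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # ...and preload it before the plugin itself, feeding the JSON bdev
    # config and the fio job file over file descriptors, as the harness
    # does with /dev/fd/62 and /dev/fd/61.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) <(gen_fio_conf)

On this runner the resolved library is /usr/lib64/libasan.so.8, as the asan_lib= assignment below confirms.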
00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:14.506 "params": { 00:35:14.506 "name": "Nvme0", 00:35:14.506 "trtype": "tcp", 00:35:14.506 "traddr": "10.0.0.2", 00:35:14.506 "adrfam": "ipv4", 00:35:14.506 "trsvcid": "4420", 00:35:14.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.506 "hdgst": false, 00:35:14.506 "ddgst": false 00:35:14.506 }, 00:35:14.506 "method": "bdev_nvme_attach_controller" 00:35:14.506 },{ 00:35:14.506 "params": { 00:35:14.506 "name": "Nvme1", 00:35:14.506 "trtype": "tcp", 00:35:14.506 "traddr": "10.0.0.2", 00:35:14.506 "adrfam": "ipv4", 00:35:14.506 "trsvcid": "4420", 00:35:14.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:14.506 "hdgst": false, 00:35:14.506 "ddgst": false 00:35:14.506 }, 00:35:14.506 "method": "bdev_nvme_attach_controller" 00:35:14.506 }' 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:14.506 00:54:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.764 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:14.764 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:14.764 fio-3.35 00:35:14.764 Starting 2 threads 00:35:26.960 00:35:26.960 filename0: (groupid=0, jobs=1): err= 0: pid=109418: Fri Jul 12 00:54:30 2024 00:35:26.960 read: IOPS=156, BW=627KiB/s (642kB/s)(6288KiB/10027msec) 00:35:26.960 slat (nsec): min=7919, max=65663, avg=13921.67, stdev=7975.66 00:35:26.960 clat (usec): min=528, max=41866, avg=25468.96, stdev=19650.26 00:35:26.960 lat (usec): min=536, max=41882, avg=25482.88, stdev=19650.11 00:35:26.960 clat percentiles (usec): 00:35:26.960 | 1.00th=[ 562], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 676], 00:35:26.960 | 30.00th=[ 766], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:35:26.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:26.960 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:26.960 | 99.99th=[41681] 00:35:26.960 bw ( KiB/s): min= 384, max= 1024, per=50.77%, avg=627.10, stdev=155.04, samples=20 00:35:26.960 iops : min= 96, max= 256, avg=156.75, stdev=38.73, samples=20 00:35:26.960 lat (usec) : 750=28.69%, 1000=7.95% 00:35:26.960 lat (msec) : 2=1.78%, 10=0.25%, 50=61.32% 00:35:26.960 cpu : usr=95.69%, sys=3.66%, ctx=21, majf=0, minf=1637 00:35:26.960 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.960 issued rwts: total=1572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:26.960 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:26.960 filename1: (groupid=0, jobs=1): err= 0: pid=109419: Fri Jul 12 00:54:30 2024 00:35:26.960 read: IOPS=152, BW=608KiB/s (623kB/s)(6096KiB/10024msec) 00:35:26.960 slat (usec): min=6, max=341, avg=15.41, stdev=12.15 00:35:26.960 clat (usec): min=537, max=42667, avg=26259.76, stdev=19470.76 00:35:26.960 lat (usec): min=546, max=42702, avg=26275.17, stdev=19470.42 00:35:26.960 clat percentiles (usec): 00:35:26.960 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 676], 00:35:26.960 | 30.00th=[ 758], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:35:26.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:26.960 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:26.960 | 99.99th=[42730] 00:35:26.960 bw ( KiB/s): min= 384, max= 1088, per=49.15%, avg=607.95, stdev=160.61, samples=20 00:35:26.960 iops : min= 96, max= 272, avg=151.95, stdev=40.14, samples=20 00:35:26.960 lat (usec) : 750=29.66%, 1000=4.79% 00:35:26.960 lat (msec) : 2=2.03%, 10=0.26%, 50=63.25% 00:35:26.960 cpu : usr=94.98%, sys=3.99%, ctx=148, majf=0, minf=1637 00:35:26.960 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.960 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.960 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:26.960 00:35:26.960 Run status group 0 (all jobs): 00:35:26.960 READ: bw=1235KiB/s (1265kB/s), 608KiB/s-627KiB/s (623kB/s-642kB/s), io=12.1MiB (12.7MB), run=10024-10027msec 00:35:26.960 ----------------------------------------------------- 00:35:26.960 Suppressions used: 00:35:26.960 count bytes template 00:35:26.960 2 16 /usr/src/fio/parse.c 00:35:26.960 1 8 libtcmalloc_minimal.so 00:35:26.960 1 904 libcrypto.so 00:35:26.960 ----------------------------------------------------- 00:35:26.960 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@45 -- # for sub in "$@" 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:26.960 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.961 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 ************************************ 00:35:27.219 END TEST fio_dif_1_multi_subsystems 00:35:27.219 ************************************ 00:35:27.219 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.219 00:35:27.219 real 0m12.722s 00:35:27.219 user 0m21.295s 00:35:27.219 sys 0m1.194s 00:35:27.219 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 00:54:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:27.219 00:54:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:27.219 00:54:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:27.219 00:54:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 ************************************ 00:35:27.219 START TEST fio_dif_rand_params 00:35:27.219 ************************************ 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:27.219 00:54:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 bdev_null0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.219 [2024-07-12 00:54:31.983061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.219 { 00:35:27.219 "params": { 00:35:27.219 "name": "Nvme$subsystem", 00:35:27.219 "trtype": "$TEST_TRANSPORT", 00:35:27.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.219 "adrfam": "ipv4", 00:35:27.219 "trsvcid": "$NVMF_PORT", 00:35:27.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.219 "hdgst": ${hdgst:-false}, 00:35:27.219 "ddgst": ${ddgst:-false} 00:35:27.219 }, 00:35:27.219 "method": "bdev_nvme_attach_controller" 00:35:27.219 } 00:35:27.219 EOF 00:35:27.219 )") 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.219 00:54:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.219 00:54:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:27.220 00:54:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.220 "params": { 00:35:27.220 "name": "Nvme0", 00:35:27.220 "trtype": "tcp", 00:35:27.220 "traddr": "10.0.0.2", 00:35:27.220 "adrfam": "ipv4", 00:35:27.220 "trsvcid": "4420", 00:35:27.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.220 "hdgst": false, 00:35:27.220 "ddgst": false 00:35:27.220 }, 00:35:27.220 "method": "bdev_nvme_attach_controller" 00:35:27.220 }' 00:35:27.220 00:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:27.220 00:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:27.220 00:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:35:27.220 00:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:27.220 00:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.478 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:27.478 ... 
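The job line just printed (rw=randread, bs=128 KiB, iodepth=3, ioengine=spdk_bdev, three clones per the numjobs=3 set at target/dif.sh@103) corresponds roughly to a job file like the sketch below. gen_fio_conf's exact output is not shown in this log, so the section name, the bdev name Nvme0n1, and the global flags are assumptions, and any DIF verify options the helper adds are omitted:

    ; hypothetical reconstruction of the job fio receives on /dev/fd/61
    [global]
    ioengine=spdk_bdev
    thread=1           ; SPDK fio plugins run jobs as threads
    time_based=1
    runtime=5          ; matches runtime=5 from target/dif.sh@103
    [filename0]
    filename=Nvme0n1   ; bdev created by bdev_nvme_attach_controller above
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

The ~5-second run times reported further down (run=5003-5029msec) are consistent with this runtime setting.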
00:35:27.478 fio-3.35 00:35:27.478 Starting 3 threads 00:35:34.032 00:35:34.032 filename0: (groupid=0, jobs=1): err= 0: pid=109573: Fri Jul 12 00:54:38 2024 00:35:34.032 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(140MiB/5005msec) 00:35:34.032 slat (nsec): min=8486, max=51979, avg=19067.20, stdev=5677.70 00:35:34.032 clat (usec): min=7244, max=56600, avg=13393.23, stdev=4366.51 00:35:34.032 lat (usec): min=7260, max=56618, avg=13412.30, stdev=4366.37 00:35:34.032 clat percentiles (usec): 00:35:34.032 | 1.00th=[ 8455], 5.00th=[10683], 10.00th=[11731], 20.00th=[12256], 00:35:34.032 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:35:34.032 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14222], 95.00th=[14615], 00:35:34.032 | 99.00th=[51643], 99.50th=[53216], 99.90th=[55313], 99.95th=[56361], 00:35:34.032 | 99.99th=[56361] 00:35:34.032 bw ( KiB/s): min=24576, max=30976, per=38.20%, avg=28728.89, stdev=2035.07, samples=9 00:35:34.032 iops : min= 192, max= 242, avg=224.44, stdev=15.90, samples=9 00:35:34.032 lat (msec) : 10=3.84%, 20=95.08%, 100=1.07% 00:35:34.032 cpu : usr=92.05%, sys=6.20%, ctx=8, majf=0, minf=1637 00:35:34.032 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.032 issued rwts: total=1119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.033 filename0: (groupid=0, jobs=1): err= 0: pid=109574: Fri Jul 12 00:54:38 2024 00:35:34.033 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(105MiB/5003msec) 00:35:34.033 slat (nsec): min=6336, max=56579, avg=15345.74, stdev=7758.77 00:35:34.033 clat (usec): min=7665, max=21176, avg=17891.01, stdev=2171.33 00:35:34.033 lat (usec): min=7675, max=21198, avg=17906.36, stdev=2171.06 00:35:34.033 clat percentiles (usec): 00:35:34.033 | 1.00th=[10421], 5.00th=[11994], 10.00th=[15795], 20.00th=[17171], 00:35:34.033 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:35:34.033 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[19792], 00:35:34.033 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21103], 99.95th=[21103], 00:35:34.033 | 99.99th=[21103] 00:35:34.033 bw ( KiB/s): min=19968, max=23040, per=28.24%, avg=21238.44, stdev=938.11, samples=9 00:35:34.033 iops : min= 156, max= 180, avg=165.89, stdev= 7.32, samples=9 00:35:34.033 lat (msec) : 10=0.72%, 20=95.70%, 50=3.58% 00:35:34.033 cpu : usr=92.46%, sys=5.86%, ctx=10, majf=0, minf=1635 00:35:34.033 IO depths : 1=31.9%, 2=68.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.033 issued rwts: total=837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.033 filename0: (groupid=0, jobs=1): err= 0: pid=109575: Fri Jul 12 00:54:38 2024 00:35:34.033 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5029msec) 00:35:34.033 slat (nsec): min=6253, max=66540, avg=19756.53, stdev=7097.70 00:35:34.033 clat (usec): min=7417, max=57673, avg=15073.66, stdev=4721.37 00:35:34.033 lat (usec): min=7434, max=57692, avg=15093.42, stdev=4721.33 00:35:34.033 clat percentiles (usec): 00:35:34.033 | 1.00th=[ 8225], 5.00th=[10552], 10.00th=[12911], 20.00th=[13698], 00:35:34.033 | 
30.00th=[14091], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:35:34.033 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:35:34.033 | 99.00th=[53740], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:35:34.033 | 99.99th=[57934] 00:35:34.033 bw ( KiB/s): min=23296, max=27648, per=33.90%, avg=25497.60, stdev=1403.21, samples=10 00:35:34.033 iops : min= 182, max= 216, avg=199.20, stdev=10.96, samples=10 00:35:34.033 lat (msec) : 10=4.00%, 20=94.79%, 50=0.10%, 100=1.10% 00:35:34.033 cpu : usr=92.44%, sys=5.77%, ctx=11, majf=0, minf=1637 00:35:34.033 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.033 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.033 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.033 00:35:34.033 Run status group 0 (all jobs): 00:35:34.033 READ: bw=73.4MiB/s (77.0MB/s), 20.9MiB/s-27.9MiB/s (21.9MB/s-29.3MB/s), io=369MiB (387MB), run=5003-5029msec 00:35:34.600 ----------------------------------------------------- 00:35:34.600 Suppressions used: 00:35:34.600 count bytes template 00:35:34.600 5 44 /usr/src/fio/parse.c 00:35:34.600 1 8 libtcmalloc_minimal.so 00:35:34.600 1 904 libcrypto.so 00:35:34.600 ----------------------------------------------------- 00:35:34.600 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 bdev_null0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 [2024-07-12 00:54:39.334582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 bdev_null1 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 bdev_null2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.600 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.601 { 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme$subsystem", 00:35:34.601 "trtype": "$TEST_TRANSPORT", 00:35:34.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "$NVMF_PORT", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.601 "hdgst": ${hdgst:-false}, 00:35:34.601 "ddgst": ${ddgst:-false} 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 } 00:35:34.601 EOF 00:35:34.601 )") 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.601 { 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme$subsystem", 00:35:34.601 "trtype": "$TEST_TRANSPORT", 00:35:34.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "$NVMF_PORT", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.601 "hdgst": ${hdgst:-false}, 00:35:34.601 "ddgst": ${ddgst:-false} 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 } 00:35:34.601 EOF 00:35:34.601 )") 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.601 { 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme$subsystem", 00:35:34.601 "trtype": "$TEST_TRANSPORT", 00:35:34.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "$NVMF_PORT", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.601 "hdgst": ${hdgst:-false}, 00:35:34.601 "ddgst": ${ddgst:-false} 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 } 00:35:34.601 EOF 00:35:34.601 )") 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme0", 00:35:34.601 "trtype": "tcp", 00:35:34.601 "traddr": "10.0.0.2", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "4420", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.601 "hdgst": false, 00:35:34.601 "ddgst": false 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 },{ 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme1", 00:35:34.601 "trtype": "tcp", 00:35:34.601 "traddr": "10.0.0.2", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "4420", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.601 "hdgst": false, 00:35:34.601 "ddgst": false 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 },{ 00:35:34.601 "params": { 00:35:34.601 "name": "Nvme2", 00:35:34.601 "trtype": "tcp", 00:35:34.601 "traddr": "10.0.0.2", 00:35:34.601 "adrfam": "ipv4", 00:35:34.601 "trsvcid": "4420", 00:35:34.601 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:34.601 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:34.601 "hdgst": false, 00:35:34.601 "ddgst": false 00:35:34.601 }, 00:35:34.601 "method": "bdev_nvme_attach_controller" 00:35:34.601 }' 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:34.601 00:54:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.860 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.860 ... 00:35:34.860 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.860 ... 00:35:34.860 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:34.860 ... 00:35:34.860 fio-3.35 00:35:34.860 Starting 24 threads 00:35:47.078 00:35:47.078 filename0: (groupid=0, jobs=1): err= 0: pid=109676: Fri Jul 12 00:54:51 2024 00:35:47.078 read: IOPS=186, BW=748KiB/s (766kB/s)(7548KiB/10091msec) 00:35:47.078 slat (usec): min=5, max=8038, avg=23.73, stdev=226.23 00:35:47.078 clat (msec): min=14, max=215, avg=85.25, stdev=29.38 00:35:47.078 lat (msec): min=14, max=215, avg=85.27, stdev=29.38 00:35:47.078 clat percentiles (msec): 00:35:47.078 | 1.00th=[ 16], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 63], 00:35:47.078 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 88], 00:35:47.078 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:35:47.078 | 99.00th=[ 180], 99.50th=[ 203], 99.90th=[ 215], 99.95th=[ 215], 00:35:47.078 | 99.99th=[ 215] 00:35:47.078 bw ( KiB/s): min= 512, max= 1264, per=4.58%, avg=748.30, stdev=177.39, samples=20 00:35:47.078 iops : min= 128, max= 316, avg=187.05, stdev=44.36, samples=20 00:35:47.078 lat (msec) : 20=1.70%, 50=4.13%, 100=70.11%, 250=24.06% 00:35:47.078 cpu : usr=40.10%, sys=0.85%, ctx=1165, majf=0, minf=1635 00:35:47.078 IO depths : 1=1.2%, 2=3.2%, 4=11.8%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:47.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 issued rwts: total=1887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.078 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.078 filename0: (groupid=0, jobs=1): err= 0: pid=109677: Fri Jul 12 00:54:51 2024 00:35:47.078 read: IOPS=192, BW=771KiB/s (789kB/s)(7752KiB/10056msec) 00:35:47.078 slat (usec): min=5, max=8036, avg=27.23, stdev=315.35 00:35:47.078 clat (msec): min=45, max=158, avg=82.83, stdev=21.51 00:35:47.078 lat (msec): min=45, max=158, avg=82.85, stdev=21.51 00:35:47.078 clat percentiles (msec): 00:35:47.078 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 63], 00:35:47.078 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 87], 00:35:47.078 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 111], 95.00th=[ 121], 00:35:47.078 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:35:47.078 | 99.99th=[ 159] 00:35:47.078 bw ( KiB/s): min= 608, max= 944, per=4.70%, avg=768.45, stdev=78.15, samples=20 00:35:47.078 iops : min= 152, max= 236, avg=192.05, stdev=19.52, samples=20 00:35:47.078 lat (msec) : 50=2.43%, 100=77.14%, 250=20.43% 00:35:47.078 cpu : usr=35.10%, sys=0.61%, ctx=961, majf=0, minf=1637 00:35:47.078 IO depths : 1=0.2%, 2=0.3%, 4=5.4%, 8=80.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:35:47.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 complete : 0=0.0%, 4=88.9%, 8=7.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.078 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.078 filename0: (groupid=0, jobs=1): err= 0: 
pid=109678: Fri Jul 12 00:54:51 2024 00:35:47.078 read: IOPS=149, BW=598KiB/s (612kB/s)(5980KiB/10003msec) 00:35:47.078 slat (nsec): min=5454, max=56370, avg=15501.46, stdev=7199.59 00:35:47.078 clat (msec): min=5, max=204, avg=106.93, stdev=30.47 00:35:47.078 lat (msec): min=5, max=204, avg=106.95, stdev=30.47 00:35:47.078 clat percentiles (msec): 00:35:47.078 | 1.00th=[ 11], 5.00th=[ 59], 10.00th=[ 85], 20.00th=[ 93], 00:35:47.078 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 110], 00:35:47.078 | 70.00th=[ 117], 80.00th=[ 127], 90.00th=[ 142], 95.00th=[ 161], 00:35:47.078 | 99.00th=[ 190], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:35:47.078 | 99.99th=[ 205] 00:35:47.078 bw ( KiB/s): min= 384, max= 761, per=3.50%, avg=572.26, stdev=83.24, samples=19 00:35:47.078 iops : min= 96, max= 190, avg=143.05, stdev=20.78, samples=19 00:35:47.078 lat (msec) : 10=0.47%, 20=2.14%, 50=1.07%, 100=37.59%, 250=58.73% 00:35:47.078 cpu : usr=35.74%, sys=0.74%, ctx=1024, majf=0, minf=1636 00:35:47.078 IO depths : 1=3.2%, 2=7.0%, 4=17.5%, 8=62.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:47.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 complete : 0=0.0%, 4=92.0%, 8=2.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 issued rwts: total=1495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.078 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.078 filename0: (groupid=0, jobs=1): err= 0: pid=109679: Fri Jul 12 00:54:51 2024 00:35:47.078 read: IOPS=193, BW=775KiB/s (793kB/s)(7800KiB/10070msec) 00:35:47.078 slat (usec): min=5, max=8056, avg=26.22, stdev=272.85 00:35:47.078 clat (msec): min=16, max=161, avg=82.32, stdev=24.07 00:35:47.078 lat (msec): min=16, max=161, avg=82.35, stdev=24.07 00:35:47.078 clat percentiles (msec): 00:35:47.078 | 1.00th=[ 31], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 62], 00:35:47.078 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 86], 00:35:47.078 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 116], 95.00th=[ 123], 00:35:47.078 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:35:47.078 | 99.99th=[ 163] 00:35:47.078 bw ( KiB/s): min= 640, max= 986, per=4.73%, avg=772.90, stdev=89.96, samples=20 00:35:47.078 iops : min= 160, max= 246, avg=193.15, stdev=22.40, samples=20 00:35:47.078 lat (msec) : 20=0.36%, 50=6.15%, 100=69.69%, 250=23.79% 00:35:47.078 cpu : usr=39.93%, sys=0.76%, ctx=942, majf=0, minf=1637 00:35:47.078 IO depths : 1=0.1%, 2=0.3%, 4=4.5%, 8=80.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:35:47.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 complete : 0=0.0%, 4=88.9%, 8=7.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.078 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.078 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.078 filename0: (groupid=0, jobs=1): err= 0: pid=109680: Fri Jul 12 00:54:51 2024 00:35:47.078 read: IOPS=165, BW=660KiB/s (676kB/s)(6608KiB/10012msec) 00:35:47.078 slat (usec): min=5, max=8032, avg=26.46, stdev=252.29 00:35:47.078 clat (msec): min=39, max=200, avg=96.75, stdev=27.12 00:35:47.078 lat (msec): min=39, max=200, avg=96.77, stdev=27.13 00:35:47.078 clat percentiles (msec): 00:35:47.078 | 1.00th=[ 51], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 72], 00:35:47.078 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 100], 00:35:47.078 | 70.00th=[ 107], 80.00th=[ 116], 90.00th=[ 132], 95.00th=[ 150], 00:35:47.078 | 99.00th=[ 186], 99.50th=[ 197], 99.90th=[ 201], 99.95th=[ 201], 
00:35:47.078 | 99.99th=[ 201] 00:35:47.078 bw ( KiB/s): min= 510, max= 816, per=3.98%, avg=649.63, stdev=86.17, samples=19 00:35:47.078 iops : min= 127, max= 204, avg=162.32, stdev=21.56, samples=19 00:35:47.079 lat (msec) : 50=0.97%, 100=60.17%, 250=38.86% 00:35:47.079 cpu : usr=36.86%, sys=0.74%, ctx=980, majf=0, minf=1637 00:35:47.079 IO depths : 1=2.1%, 2=4.5%, 4=13.0%, 8=69.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename0: (groupid=0, jobs=1): err= 0: pid=109681: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=148, BW=594KiB/s (608kB/s)(5940KiB/10005msec) 00:35:47.079 slat (usec): min=5, max=8036, avg=25.63, stdev=255.06 00:35:47.079 clat (msec): min=20, max=185, avg=107.58, stdev=27.59 00:35:47.079 lat (msec): min=20, max=185, avg=107.60, stdev=27.59 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 21], 5.00th=[ 63], 10.00th=[ 74], 20.00th=[ 94], 00:35:47.079 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 106], 60.00th=[ 109], 00:35:47.079 | 70.00th=[ 121], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 157], 00:35:47.079 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 186], 00:35:47.079 | 99.99th=[ 186] 00:35:47.079 bw ( KiB/s): min= 496, max= 696, per=3.56%, avg=581.05, stdev=73.28, samples=19 00:35:47.079 iops : min= 124, max= 174, avg=145.26, stdev=18.32, samples=19 00:35:47.079 lat (msec) : 50=2.15%, 100=40.54%, 250=57.31% 00:35:47.079 cpu : usr=37.77%, sys=0.98%, ctx=1101, majf=0, minf=1636 00:35:47.079 IO depths : 1=3.5%, 2=8.1%, 4=20.5%, 8=58.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename0: (groupid=0, jobs=1): err= 0: pid=109682: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=163, BW=653KiB/s (669kB/s)(6540KiB/10011msec) 00:35:47.079 slat (usec): min=5, max=8066, avg=20.47, stdev=199.54 00:35:47.079 clat (msec): min=36, max=177, avg=97.76, stdev=25.81 00:35:47.079 lat (msec): min=36, max=177, avg=97.79, stdev=25.81 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 75], 00:35:47.079 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 101], 00:35:47.079 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 142], 00:35:47.079 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 178], 00:35:47.079 | 99.99th=[ 178] 00:35:47.079 bw ( KiB/s): min= 507, max= 848, per=3.96%, avg=647.32, stdev=94.08, samples=19 00:35:47.079 iops : min= 126, max= 212, avg=161.74, stdev=23.58, samples=19 00:35:47.079 lat (msec) : 50=2.57%, 100=56.09%, 250=41.35% 00:35:47.079 cpu : usr=38.38%, sys=0.97%, ctx=1323, majf=0, minf=1634 00:35:47.079 IO depths : 1=2.0%, 2=4.6%, 4=13.3%, 8=68.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename0: (groupid=0, jobs=1): err= 0: pid=109683: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=146, BW=588KiB/s (602kB/s)(5888KiB/10020msec) 00:35:47.079 slat (usec): min=4, max=8035, avg=25.31, stdev=282.04 00:35:47.079 clat (msec): min=17, max=206, avg=108.67, stdev=31.42 00:35:47.079 lat (msec): min=17, max=206, avg=108.69, stdev=31.41 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 88], 00:35:47.079 | 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 111], 00:35:47.079 | 70.00th=[ 120], 80.00th=[ 133], 90.00th=[ 153], 95.00th=[ 167], 00:35:47.079 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 207], 99.95th=[ 207], 00:35:47.079 | 99.99th=[ 207] 00:35:47.079 bw ( KiB/s): min= 512, max= 672, per=3.50%, avg=572.53, stdev=62.95, samples=19 00:35:47.079 iops : min= 128, max= 168, avg=143.11, stdev=15.71, samples=19 00:35:47.079 lat (msec) : 20=0.61%, 50=1.56%, 100=38.86%, 250=58.97% 00:35:47.079 cpu : usr=39.07%, sys=0.98%, ctx=1156, majf=0, minf=1634 00:35:47.079 IO depths : 1=2.7%, 2=6.0%, 4=16.0%, 8=65.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename1: (groupid=0, jobs=1): err= 0: pid=109684: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=186, BW=745KiB/s (763kB/s)(7504KiB/10071msec) 00:35:47.079 slat (usec): min=5, max=4757, avg=23.13, stdev=170.61 00:35:47.079 clat (msec): min=18, max=179, avg=85.58, stdev=26.53 00:35:47.079 lat (msec): min=18, max=179, avg=85.60, stdev=26.54 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 21], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 64], 00:35:47.079 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 91], 00:35:47.079 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 136], 00:35:47.079 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 180], 00:35:47.079 | 99.99th=[ 180] 00:35:47.079 bw ( KiB/s): min= 512, max= 1017, per=4.56%, avg=744.90, stdev=141.44, samples=20 00:35:47.079 iops : min= 128, max= 254, avg=186.15, stdev=35.34, samples=20 00:35:47.079 lat (msec) : 20=0.85%, 50=2.56%, 100=70.52%, 250=26.07% 00:35:47.079 cpu : usr=38.29%, sys=0.78%, ctx=1189, majf=0, minf=1635 00:35:47.079 IO depths : 1=1.3%, 2=2.9%, 4=10.0%, 8=73.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename1: (groupid=0, jobs=1): err= 0: pid=109685: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=159, BW=639KiB/s (654kB/s)(6396KiB/10010msec) 00:35:47.079 slat (usec): min=6, max=8031, avg=27.92, stdev=300.71 00:35:47.079 clat (msec): min=27, max=203, avg=99.96, stdev=27.53 00:35:47.079 lat (msec): min=27, max=203, avg=99.99, stdev=27.54 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 28], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 74], 00:35:47.079 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 105], 00:35:47.079 | 70.00th=[ 109], 80.00th=[ 123], 90.00th=[ 142], 
95.00th=[ 157], 00:35:47.079 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 205], 99.95th=[ 205], 00:35:47.079 | 99.99th=[ 205] 00:35:47.079 bw ( KiB/s): min= 512, max= 792, per=3.83%, avg=626.11, stdev=100.66, samples=19 00:35:47.079 iops : min= 128, max= 198, avg=156.53, stdev=25.16, samples=19 00:35:47.079 lat (msec) : 50=1.38%, 100=53.91%, 250=44.72% 00:35:47.079 cpu : usr=38.25%, sys=0.85%, ctx=1114, majf=0, minf=1634 00:35:47.079 IO depths : 1=1.8%, 2=4.2%, 4=14.0%, 8=68.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename1: (groupid=0, jobs=1): err= 0: pid=109686: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=184, BW=740KiB/s (758kB/s)(7448KiB/10065msec) 00:35:47.079 slat (usec): min=5, max=4036, avg=20.31, stdev=131.88 00:35:47.079 clat (msec): min=20, max=213, avg=86.23, stdev=26.63 00:35:47.079 lat (msec): min=20, max=213, avg=86.25, stdev=26.62 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 29], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 65], 00:35:47.079 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 90], 00:35:47.079 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 122], 95.00th=[ 133], 00:35:47.079 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 213], 99.95th=[ 213], 00:35:47.079 | 99.99th=[ 213] 00:35:47.079 bw ( KiB/s): min= 464, max= 913, per=4.51%, avg=737.75, stdev=130.25, samples=20 00:35:47.079 iops : min= 116, max= 228, avg=184.35, stdev=32.63, samples=20 00:35:47.079 lat (msec) : 50=2.69%, 100=69.66%, 250=27.66% 00:35:47.079 cpu : usr=37.36%, sys=0.83%, ctx=1076, majf=0, minf=1634 00:35:47.079 IO depths : 1=0.9%, 2=2.0%, 4=8.4%, 8=76.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 issued rwts: total=1862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename1: (groupid=0, jobs=1): err= 0: pid=109687: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=153, BW=615KiB/s (629kB/s)(6156KiB/10015msec) 00:35:47.079 slat (usec): min=5, max=4037, avg=17.63, stdev=102.73 00:35:47.079 clat (msec): min=15, max=223, avg=103.94, stdev=28.68 00:35:47.079 lat (msec): min=15, max=223, avg=103.95, stdev=28.68 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 19], 5.00th=[ 65], 10.00th=[ 73], 20.00th=[ 88], 00:35:47.079 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 105], 00:35:47.079 | 70.00th=[ 113], 80.00th=[ 125], 90.00th=[ 132], 95.00th=[ 167], 00:35:47.079 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:35:47.079 | 99.99th=[ 224] 00:35:47.079 bw ( KiB/s): min= 512, max= 736, per=3.68%, avg=600.84, stdev=73.75, samples=19 00:35:47.079 iops : min= 128, max= 184, avg=150.21, stdev=18.44, samples=19 00:35:47.079 lat (msec) : 20=1.62%, 50=0.45%, 100=48.99%, 250=48.93% 00:35:47.079 cpu : usr=45.62%, sys=1.06%, ctx=1354, majf=0, minf=1636 00:35:47.079 IO depths : 1=3.9%, 2=8.3%, 4=18.9%, 8=60.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:47.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.079 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:47.079 issued rwts: total=1539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.079 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.079 filename1: (groupid=0, jobs=1): err= 0: pid=109688: Fri Jul 12 00:54:51 2024 00:35:47.079 read: IOPS=216, BW=864KiB/s (885kB/s)(8688KiB/10052msec) 00:35:47.079 slat (nsec): min=4520, max=55495, avg=14046.91, stdev=6337.42 00:35:47.079 clat (usec): min=1912, max=179937, avg=73767.46, stdev=37136.81 00:35:47.079 lat (usec): min=1921, max=179959, avg=73781.51, stdev=37137.43 00:35:47.079 clat percentiles (msec): 00:35:47.079 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 57], 00:35:47.079 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 86], 00:35:47.079 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 118], 95.00th=[ 124], 00:35:47.079 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:35:47.079 | 99.99th=[ 180] 00:35:47.079 bw ( KiB/s): min= 512, max= 3456, per=5.28%, avg=862.40, stdev=618.14, samples=20 00:35:47.079 iops : min= 128, max= 864, avg=215.60, stdev=154.54, samples=20 00:35:47.079 lat (msec) : 2=0.87%, 4=10.17%, 10=2.95%, 20=2.21%, 50=1.75% 00:35:47.079 lat (msec) : 100=65.33%, 250=16.71% 00:35:47.079 cpu : usr=38.69%, sys=0.98%, ctx=1194, majf=0, minf=1635 00:35:47.080 IO depths : 1=2.4%, 2=4.9%, 4=13.8%, 8=68.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename1: (groupid=0, jobs=1): err= 0: pid=109689: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=180, BW=723KiB/s (740kB/s)(7264KiB/10050msec) 00:35:47.080 slat (usec): min=5, max=4136, avg=17.19, stdev=96.94 00:35:47.080 clat (usec): min=1258, max=203804, avg=88263.97, stdev=33734.97 00:35:47.080 lat (usec): min=1283, max=203814, avg=88281.16, stdev=33730.30 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 4], 5.00th=[ 25], 10.00th=[ 58], 20.00th=[ 66], 00:35:47.080 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:35:47.080 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 144], 00:35:47.080 | 99.00th=[ 171], 99.50th=[ 197], 99.90th=[ 205], 99.95th=[ 205], 00:35:47.080 | 99.99th=[ 205] 00:35:47.080 bw ( KiB/s): min= 512, max= 1396, per=4.42%, avg=722.60, stdev=184.02, samples=20 00:35:47.080 iops : min= 128, max= 349, avg=180.60, stdev=46.02, samples=20 00:35:47.080 lat (msec) : 2=0.11%, 4=1.05%, 10=2.26%, 20=0.99%, 50=3.25% 00:35:47.080 lat (msec) : 100=60.63%, 250=31.72% 00:35:47.080 cpu : usr=35.83%, sys=0.73%, ctx=1113, majf=0, minf=1637 00:35:47.080 IO depths : 1=1.9%, 2=4.3%, 4=14.0%, 8=68.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename1: (groupid=0, jobs=1): err= 0: pid=109690: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=174, BW=699KiB/s (715kB/s)(7004KiB/10025msec) 00:35:47.080 slat (usec): min=5, max=8033, avg=22.69, stdev=221.31 00:35:47.080 clat (msec): min=47, max=216, avg=91.45, stdev=25.03 00:35:47.080 lat (msec): min=47, max=216, avg=91.47, 
stdev=25.03 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 70], 00:35:47.080 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:35:47.080 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 140], 00:35:47.080 | 99.00th=[ 167], 99.50th=[ 218], 99.90th=[ 218], 99.95th=[ 218], 00:35:47.080 | 99.99th=[ 218] 00:35:47.080 bw ( KiB/s): min= 512, max= 848, per=4.24%, avg=693.90, stdev=90.01, samples=20 00:35:47.080 iops : min= 128, max= 212, avg=173.45, stdev=22.48, samples=20 00:35:47.080 lat (msec) : 50=0.34%, 100=69.27%, 250=30.38% 00:35:47.080 cpu : usr=38.05%, sys=0.85%, ctx=1083, majf=0, minf=1636 00:35:47.080 IO depths : 1=1.4%, 2=3.4%, 4=12.2%, 8=71.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename1: (groupid=0, jobs=1): err= 0: pid=109691: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=150, BW=601KiB/s (616kB/s)(6016KiB/10005msec) 00:35:47.080 slat (nsec): min=5693, max=66896, avg=14300.10, stdev=6575.61 00:35:47.080 clat (msec): min=9, max=197, avg=106.32, stdev=30.22 00:35:47.080 lat (msec): min=9, max=197, avg=106.33, stdev=30.22 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 10], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 88], 00:35:47.080 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 110], 00:35:47.080 | 70.00th=[ 120], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 161], 00:35:47.080 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 197], 00:35:47.080 | 99.99th=[ 197] 00:35:47.080 bw ( KiB/s): min= 510, max= 752, per=3.59%, avg=586.00, stdev=73.48, samples=19 00:35:47.080 iops : min= 127, max= 188, avg=146.47, stdev=18.40, samples=19 00:35:47.080 lat (msec) : 10=1.06%, 50=1.33%, 100=49.47%, 250=48.14% 00:35:47.080 cpu : usr=38.37%, sys=0.93%, ctx=1341, majf=0, minf=1636 00:35:47.080 IO depths : 1=3.4%, 2=8.0%, 4=19.7%, 8=59.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename2: (groupid=0, jobs=1): err= 0: pid=109692: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=152, BW=611KiB/s (626kB/s)(6120KiB/10013msec) 00:35:47.080 slat (usec): min=5, max=8036, avg=21.27, stdev=205.16 00:35:47.080 clat (msec): min=22, max=179, avg=104.53, stdev=25.65 00:35:47.080 lat (msec): min=22, max=179, avg=104.55, stdev=25.65 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 39], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 85], 00:35:47.080 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:35:47.080 | 70.00th=[ 120], 80.00th=[ 129], 90.00th=[ 142], 95.00th=[ 155], 00:35:47.080 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:35:47.080 | 99.99th=[ 180] 00:35:47.080 bw ( KiB/s): min= 510, max= 744, per=3.72%, avg=607.05, stdev=74.75, samples=19 00:35:47.080 iops : min= 127, max= 186, avg=151.68, stdev=18.70, samples=19 00:35:47.080 lat (msec) : 50=1.50%, 100=51.44%, 250=47.06% 00:35:47.080 cpu : usr=32.79%, sys=0.69%, ctx=957, 
majf=0, minf=1634 00:35:47.080 IO depths : 1=2.2%, 2=5.1%, 4=14.1%, 8=68.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename2: (groupid=0, jobs=1): err= 0: pid=109693: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=170, BW=684KiB/s (700kB/s)(6848KiB/10016msec) 00:35:47.080 slat (usec): min=4, max=8046, avg=19.41, stdev=194.23 00:35:47.080 clat (msec): min=34, max=197, avg=93.47, stdev=25.63 00:35:47.080 lat (msec): min=34, max=197, avg=93.49, stdev=25.62 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 42], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 70], 00:35:47.080 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 96], 00:35:47.080 | 70.00th=[ 102], 80.00th=[ 113], 90.00th=[ 127], 95.00th=[ 148], 00:35:47.080 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 199], 99.95th=[ 199], 00:35:47.080 | 99.99th=[ 199] 00:35:47.080 bw ( KiB/s): min= 512, max= 912, per=4.15%, avg=678.40, stdev=116.33, samples=20 00:35:47.080 iops : min= 128, max= 228, avg=169.60, stdev=29.08, samples=20 00:35:47.080 lat (msec) : 50=2.80%, 100=66.06%, 250=31.13% 00:35:47.080 cpu : usr=41.19%, sys=0.95%, ctx=1325, majf=0, minf=1636 00:35:47.080 IO depths : 1=2.3%, 2=5.0%, 4=13.1%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=91.0%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename2: (groupid=0, jobs=1): err= 0: pid=109694: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=153, BW=613KiB/s (628kB/s)(6144KiB/10015msec) 00:35:47.080 slat (usec): min=5, max=3541, avg=17.96, stdev=90.32 00:35:47.080 clat (msec): min=15, max=191, avg=104.17, stdev=26.54 00:35:47.080 lat (msec): min=15, max=191, avg=104.19, stdev=26.54 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 19], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 89], 00:35:47.080 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 106], 00:35:47.080 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 136], 95.00th=[ 155], 00:35:47.080 | 99.00th=[ 171], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:35:47.080 | 99.99th=[ 192] 00:35:47.080 bw ( KiB/s): min= 512, max= 640, per=3.63%, avg=592.84, stdev=63.44, samples=19 00:35:47.080 iops : min= 128, max= 160, avg=148.21, stdev=15.86, samples=19 00:35:47.080 lat (msec) : 20=1.30%, 50=0.39%, 100=52.60%, 250=45.70% 00:35:47.080 cpu : usr=38.99%, sys=1.07%, ctx=1194, majf=0, minf=1636 00:35:47.080 IO depths : 1=3.0%, 2=6.6%, 4=16.5%, 8=64.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename2: (groupid=0, jobs=1): err= 0: pid=109695: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=172, BW=690KiB/s (706kB/s)(6928KiB/10044msec) 00:35:47.080 slat (usec): min=5, max=8032, avg=25.57, stdev=241.30 00:35:47.080 clat (msec): 
min=45, max=174, avg=92.44, stdev=26.39 00:35:47.080 lat (msec): min=45, max=174, avg=92.47, stdev=26.39 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 67], 00:35:47.080 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:35:47.080 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 127], 95.00th=[ 146], 00:35:47.080 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 176], 00:35:47.080 | 99.99th=[ 176] 00:35:47.080 bw ( KiB/s): min= 512, max= 864, per=4.20%, avg=686.40, stdev=104.28, samples=20 00:35:47.080 iops : min= 128, max= 216, avg=171.55, stdev=26.08, samples=20 00:35:47.080 lat (msec) : 50=2.08%, 100=62.01%, 250=35.91% 00:35:47.080 cpu : usr=41.66%, sys=0.89%, ctx=1257, majf=0, minf=1634 00:35:47.080 IO depths : 1=1.3%, 2=2.6%, 4=9.2%, 8=74.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.080 issued rwts: total=1732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.080 filename2: (groupid=0, jobs=1): err= 0: pid=109696: Fri Jul 12 00:54:51 2024 00:35:47.080 read: IOPS=163, BW=654KiB/s (670kB/s)(6552KiB/10016msec) 00:35:47.080 slat (usec): min=5, max=8045, avg=30.25, stdev=313.25 00:35:47.080 clat (msec): min=16, max=215, avg=97.61, stdev=28.62 00:35:47.080 lat (msec): min=16, max=215, avg=97.64, stdev=28.61 00:35:47.080 clat percentiles (msec): 00:35:47.080 | 1.00th=[ 44], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 72], 00:35:47.080 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 101], 00:35:47.080 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 157], 00:35:47.080 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 215], 00:35:47.080 | 99.99th=[ 215] 00:35:47.080 bw ( KiB/s): min= 510, max= 888, per=3.88%, avg=634.16, stdev=116.62, samples=19 00:35:47.080 iops : min= 127, max= 222, avg=158.47, stdev=29.17, samples=19 00:35:47.080 lat (msec) : 20=0.37%, 50=2.26%, 100=57.45%, 250=39.93% 00:35:47.080 cpu : usr=37.04%, sys=0.91%, ctx=1075, majf=0, minf=1636 00:35:47.080 IO depths : 1=1.9%, 2=4.2%, 4=11.8%, 8=70.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:47.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.081 filename2: (groupid=0, jobs=1): err= 0: pid=109697: Fri Jul 12 00:54:51 2024 00:35:47.081 read: IOPS=184, BW=738KiB/s (756kB/s)(7424KiB/10062msec) 00:35:47.081 slat (usec): min=4, max=8034, avg=29.27, stdev=308.48 00:35:47.081 clat (msec): min=25, max=192, avg=86.34, stdev=27.41 00:35:47.081 lat (msec): min=25, max=192, avg=86.36, stdev=27.42 00:35:47.081 clat percentiles (msec): 00:35:47.081 | 1.00th=[ 31], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 62], 00:35:47.081 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 92], 00:35:47.081 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:35:47.081 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 174], 99.95th=[ 192], 00:35:47.081 | 99.99th=[ 192] 00:35:47.081 bw ( KiB/s): min= 508, max= 986, per=4.50%, avg=735.40, stdev=133.35, samples=20 00:35:47.081 iops : min= 127, max= 246, avg=183.80, stdev=33.30, samples=20 00:35:47.081 lat (msec) : 
50=2.26%, 100=73.22%, 250=24.52% 00:35:47.081 cpu : usr=35.39%, sys=0.78%, ctx=981, majf=0, minf=1636 00:35:47.081 IO depths : 1=1.2%, 2=2.7%, 4=9.9%, 8=74.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:47.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.081 filename2: (groupid=0, jobs=1): err= 0: pid=109698: Fri Jul 12 00:54:51 2024 00:35:47.081 read: IOPS=179, BW=719KiB/s (736kB/s)(7244KiB/10081msec) 00:35:47.081 slat (usec): min=5, max=8034, avg=22.23, stdev=210.89 00:35:47.081 clat (msec): min=25, max=179, avg=88.74, stdev=26.11 00:35:47.081 lat (msec): min=25, max=179, avg=88.76, stdev=26.11 00:35:47.081 clat percentiles (msec): 00:35:47.081 | 1.00th=[ 46], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 64], 00:35:47.081 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 96], 00:35:47.081 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:35:47.081 | 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:35:47.081 | 99.99th=[ 180] 00:35:47.081 bw ( KiB/s): min= 552, max= 938, per=4.39%, avg=717.35, stdev=108.78, samples=20 00:35:47.081 iops : min= 138, max= 234, avg=179.30, stdev=27.12, samples=20 00:35:47.081 lat (msec) : 50=2.93%, 100=70.84%, 250=26.23% 00:35:47.081 cpu : usr=32.98%, sys=0.67%, ctx=961, majf=0, minf=1637 00:35:47.081 IO depths : 1=1.2%, 2=2.9%, 4=11.0%, 8=72.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:47.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.081 filename2: (groupid=0, jobs=1): err= 0: pid=109699: Fri Jul 12 00:54:51 2024 00:35:47.081 read: IOPS=172, BW=691KiB/s (707kB/s)(6948KiB/10057msec) 00:35:47.081 slat (usec): min=5, max=8028, avg=19.54, stdev=192.39 00:35:47.081 clat (msec): min=27, max=215, avg=92.28, stdev=27.84 00:35:47.081 lat (msec): min=27, max=216, avg=92.30, stdev=27.85 00:35:47.081 clat percentiles (msec): 00:35:47.081 | 1.00th=[ 35], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 70], 00:35:47.081 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 96], 00:35:47.081 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 129], 95.00th=[ 136], 00:35:47.081 | 99.00th=[ 174], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:35:47.081 | 99.99th=[ 215] 00:35:47.081 bw ( KiB/s): min= 472, max= 894, per=4.23%, avg=691.60, stdev=116.78, samples=20 00:35:47.081 iops : min= 118, max= 223, avg=172.85, stdev=29.14, samples=20 00:35:47.081 lat (msec) : 50=2.65%, 100=64.82%, 250=32.53% 00:35:47.081 cpu : usr=32.88%, sys=0.76%, ctx=977, majf=0, minf=1636 00:35:47.081 IO depths : 1=2.0%, 2=4.2%, 4=11.8%, 8=70.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:47.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.081 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:47.081 00:35:47.081 Run status group 0 (all jobs): 00:35:47.081 READ: bw=15.9MiB/s (16.7MB/s), 588KiB/s-864KiB/s (602kB/s-885kB/s), io=161MiB (169MB), run=10003-10091msec 
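The 24-thread group above is driven by fio's SPDK bdev plugin rather than the kernel block layer: the harness LD_PRELOADs the plugin (together with libasan) and hands fio a generated JSON bdev config on /dev/fd/62. A rough standalone equivalent of the traced invocation is sketched below; the ./bdev.json path and the Nvme0n1 bdev name are illustrative assumptions, not values taken from this log.

# sketch: run fio against an SPDK bdev via the LD_PRELOADed plugin
# ./bdev.json stands in for the /dev/fd/62 config the harness generates;
# Nvme0n1 assumes SPDK's usual <controller>n<nsid> bdev naming;
# --thread is assumed required, per SPDK's fio plugin documentation
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread \
  --spdk_json_conf=./bdev.json --filename=Nvme0n1 \
  --rw=randread --bs=4k --iodepth=16 --runtime=10 --time_based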
00:35:47.647 ----------------------------------------------------- 00:35:47.647 Suppressions used: 00:35:47.647 count bytes template 00:35:47.647 45 402 /usr/src/fio/parse.c 00:35:47.647 1 8 libtcmalloc_minimal.so 00:35:47.647 1 904 libcrypto.so 00:35:47.647 ----------------------------------------------------- 00:35:47.647 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 bdev_null0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 [2024-07-12 00:54:52.390277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:47.647 00:54:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 bdev_null1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:47.647 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.648 { 00:35:47.648 "params": { 00:35:47.648 "name": "Nvme$subsystem", 00:35:47.648 "trtype": "$TEST_TRANSPORT", 00:35:47.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.648 "adrfam": "ipv4", 00:35:47.648 "trsvcid": "$NVMF_PORT", 00:35:47.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.648 "hdgst": ${hdgst:-false}, 00:35:47.648 "ddgst": ${ddgst:-false} 00:35:47.648 }, 00:35:47.648 "method": "bdev_nvme_attach_controller" 00:35:47.648 } 00:35:47.648 EOF 00:35:47.648 )") 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.648 00:54:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.648 { 00:35:47.648 "params": { 00:35:47.648 "name": "Nvme$subsystem", 00:35:47.648 "trtype": "$TEST_TRANSPORT", 00:35:47.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.648 "adrfam": "ipv4", 00:35:47.648 "trsvcid": "$NVMF_PORT", 00:35:47.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.648 "hdgst": ${hdgst:-false}, 00:35:47.648 "ddgst": ${ddgst:-false} 00:35:47.648 }, 00:35:47.648 "method": "bdev_nvme_attach_controller" 00:35:47.648 } 00:35:47.648 EOF 00:35:47.648 )") 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:47.648 "params": { 00:35:47.648 "name": "Nvme0", 00:35:47.648 "trtype": "tcp", 00:35:47.648 "traddr": "10.0.0.2", 00:35:47.648 "adrfam": "ipv4", 00:35:47.648 "trsvcid": "4420", 00:35:47.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.648 "hdgst": false, 00:35:47.648 "ddgst": false 00:35:47.648 }, 00:35:47.648 "method": "bdev_nvme_attach_controller" 00:35:47.648 },{ 00:35:47.648 "params": { 00:35:47.648 "name": "Nvme1", 00:35:47.648 "trtype": "tcp", 00:35:47.648 "traddr": "10.0.0.2", 00:35:47.648 "adrfam": "ipv4", 00:35:47.648 "trsvcid": "4420", 00:35:47.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.648 "hdgst": false, 00:35:47.648 "ddgst": false 00:35:47.648 }, 00:35:47.648 "method": "bdev_nvme_attach_controller" 00:35:47.648 }' 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:47.648 00:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.906 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:47.906 ... 00:35:47.906 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:47.906 ... 
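The config printed above is assembled by the pattern traced in the nvmf/common.sh lines: one bdev_nvme_attach_controller fragment per target subsystem, joined with commas via IFS and validated with jq before being fed to fio. A simplified sketch of that pattern follows; the outer "subsystems"/"bdev" wrapper is an assumption (the log shows only the joined fragments), and the addresses mirror the trace.

# sketch of the per-subsystem config assembly traced above
config=()
for subsystem in 0 1; do
  # one attach_controller entry per subsystem; host/target digests off by default
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,   # so ${config[*]} joins the fragments with commas, as in the trace
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .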
00:35:47.906 fio-3.35 00:35:47.906 Starting 4 threads 00:35:54.464 00:35:54.464 filename0: (groupid=0, jobs=1): err= 0: pid=109825: Fri Jul 12 00:54:58 2024 00:35:54.464 read: IOPS=1532, BW=12.0MiB/s (12.6MB/s)(59.9MiB/5002msec) 00:35:54.464 slat (nsec): min=4564, max=68523, avg=16286.62, stdev=4895.71 00:35:54.464 clat (usec): min=3833, max=10862, avg=5141.65, stdev=507.06 00:35:54.464 lat (usec): min=3849, max=10878, avg=5157.94, stdev=507.00 00:35:54.464 clat percentiles (usec): 00:35:54.464 | 1.00th=[ 4686], 5.00th=[ 4883], 10.00th=[ 4883], 20.00th=[ 4948], 00:35:54.464 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5080], 00:35:54.464 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5866], 00:35:54.464 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 9241], 00:35:54.464 | 99.99th=[10814] 00:35:54.464 bw ( KiB/s): min=11136, max=12672, per=25.00%, avg=12259.56, stdev=473.68, samples=9 00:35:54.464 iops : min= 1392, max= 1584, avg=1532.44, stdev=59.21, samples=9 00:35:54.464 lat (msec) : 4=0.25%, 10=99.71%, 20=0.04% 00:35:54.464 cpu : usr=94.40%, sys=4.32%, ctx=88, majf=0, minf=1637 00:35:54.464 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 issued rwts: total=7664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.465 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.465 filename0: (groupid=0, jobs=1): err= 0: pid=109826: Fri Jul 12 00:54:58 2024 00:35:54.465 read: IOPS=1532, BW=12.0MiB/s (12.6MB/s)(59.9MiB/5002msec) 00:35:54.465 slat (nsec): min=5598, max=58599, avg=16149.56, stdev=5342.93 00:35:54.465 clat (usec): min=3523, max=10731, avg=5136.24, stdev=510.04 00:35:54.465 lat (usec): min=3542, max=10740, avg=5152.39, stdev=510.12 00:35:54.465 clat percentiles (usec): 00:35:54.465 | 1.00th=[ 4686], 5.00th=[ 4883], 10.00th=[ 4883], 20.00th=[ 4948], 00:35:54.465 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5080], 00:35:54.465 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5866], 00:35:54.465 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[10028], 00:35:54.465 | 99.99th=[10683] 00:35:54.465 bw ( KiB/s): min=11264, max=12672, per=25.01%, avg=12262.22, stdev=419.86, samples=9 00:35:54.465 iops : min= 1408, max= 1584, avg=1532.78, stdev=52.48, samples=9 00:35:54.465 lat (msec) : 4=0.22%, 10=99.73%, 20=0.05% 00:35:54.465 cpu : usr=93.64%, sys=5.14%, ctx=9, majf=0, minf=1635 00:35:54.465 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 issued rwts: total=7664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.465 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.465 filename1: (groupid=0, jobs=1): err= 0: pid=109827: Fri Jul 12 00:54:58 2024 00:35:54.465 read: IOPS=1532, BW=12.0MiB/s (12.6MB/s)(59.9MiB/5002msec) 00:35:54.465 slat (nsec): min=5631, max=63102, avg=11843.81, stdev=4917.48 00:35:54.465 clat (usec): min=2474, max=13544, avg=5157.05, stdev=577.80 00:35:54.465 lat (usec): min=2491, max=13555, avg=5168.89, stdev=577.78 00:35:54.465 clat percentiles (usec): 00:35:54.465 | 1.00th=[ 4686], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 4948], 00:35:54.465 | 
30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5014], 60.00th=[ 5080], 00:35:54.465 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5997], 00:35:54.465 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10421], 99.95th=[11469], 00:35:54.465 | 99.99th=[13566] 00:35:54.465 bw ( KiB/s): min=11264, max=12672, per=25.00%, avg=12259.56, stdev=433.02, samples=9 00:35:54.465 iops : min= 1408, max= 1584, avg=1532.44, stdev=54.13, samples=9 00:35:54.465 lat (msec) : 4=0.40%, 10=99.47%, 20=0.13% 00:35:54.465 cpu : usr=93.88%, sys=4.90%, ctx=7, majf=0, minf=1637 00:35:54.465 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 issued rwts: total=7664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.465 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.465 filename1: (groupid=0, jobs=1): err= 0: pid=109828: Fri Jul 12 00:54:58 2024 00:35:54.465 read: IOPS=1532, BW=12.0MiB/s (12.6MB/s)(59.9MiB/5002msec) 00:35:54.465 slat (nsec): min=5308, max=92087, avg=11853.69, stdev=5036.71 00:35:54.465 clat (usec): min=3653, max=10748, avg=5158.19, stdev=528.99 00:35:54.465 lat (usec): min=3671, max=10758, avg=5170.05, stdev=529.07 00:35:54.465 clat percentiles (usec): 00:35:54.465 | 1.00th=[ 4752], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 4948], 00:35:54.465 | 30.00th=[ 5014], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5080], 00:35:54.465 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5866], 00:35:54.465 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[10028], 00:35:54.465 | 99.99th=[10814] 00:35:54.465 bw ( KiB/s): min=11264, max=12672, per=25.00%, avg=12259.56, stdev=437.72, samples=9 00:35:54.465 iops : min= 1408, max= 1584, avg=1532.44, stdev=54.72, samples=9 00:35:54.465 lat (msec) : 4=0.37%, 10=99.58%, 20=0.05% 00:35:54.465 cpu : usr=94.26%, sys=4.48%, ctx=9, majf=0, minf=1637 00:35:54.465 IO depths : 1=11.6%, 2=25.0%, 4=50.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.465 issued rwts: total=7664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.465 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:54.465 00:35:54.465 Run status group 0 (all jobs): 00:35:54.465 READ: bw=47.9MiB/s (50.2MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=240MiB (251MB), run=5002-5002msec 00:35:55.400 ----------------------------------------------------- 00:35:55.400 Suppressions used: 00:35:55.400 count bytes template 00:35:55.400 6 52 /usr/src/fio/parse.c 00:35:55.400 1 8 libtcmalloc_minimal.so 00:35:55.400 1 904 libcrypto.so 00:35:55.400 ----------------------------------------------------- 00:35:55.400 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
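The teardown traced here reverses the setup: each nvmf subsystem is deleted, then its backing null bdev. Since rpc_cmd is effectively a wrapper around SPDK's rpc.py (the script path below is the conventional one, an assumption for this log), the whole per-subsystem lifecycle exercised by this test reduces to the following RPCs, with arguments taken verbatim from the traces:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # conventional path, assumed

# create: 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1,
# then an NVMe-oF subsystem, its namespace, and a TCP listener
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# destroy: reverse order, as in the teardown traced here
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0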
00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 ************************************ 00:35:55.400 END TEST fio_dif_rand_params 00:35:55.400 ************************************ 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:35:55.400 real 0m28.090s 00:35:55.400 user 2m10.664s 00:35:55.400 sys 0m5.193s 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:55.400 00:55:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:55.400 00:55:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:55.400 00:55:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 ************************************ 00:35:55.400 START TEST fio_dif_digest 00:35:55.400 ************************************ 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
iodepth=3 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 bdev_null0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.400 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.401 [2024-07-12 00:55:00.128994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.401 { 00:35:55.401 "params": { 00:35:55.401 "name": "Nvme$subsystem", 00:35:55.401 "trtype": "$TEST_TRANSPORT", 00:35:55.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.401 "adrfam": "ipv4", 00:35:55.401 "trsvcid": "$NVMF_PORT", 00:35:55.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.401 "hdgst": ${hdgst:-false}, 00:35:55.401 "ddgst": ${ddgst:-false} 00:35:55.401 }, 00:35:55.401 "method": "bdev_nvme_attach_controller" 00:35:55.401 } 00:35:55.401 EOF 00:35:55.401 )") 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
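[Annotation] What the harness just traced is its sanitizer-preload check: it runs ldd on the fio bdev plugin, greps for libasan, and if a runtime is found builds an LD_PRELOAD listing the sanitizer library ahead of the plugin itself — fio loads external engines via dlopen(), so ASan must be in the process before any instrumented code. Condensed into a standalone sketch (bdev.json and job.fio stand in for the /dev/fd/62 and /dev/fd/61 process substitutions used here):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # third ldd column is the resolved library path
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime, then the plugin, before fio starts
    [ -n "$asan_lib" ] && export LD_PRELOAD="$asan_lib $plugin"
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio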
00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:55.401 "params": { 00:35:55.401 "name": "Nvme0", 00:35:55.401 "trtype": "tcp", 00:35:55.401 "traddr": "10.0.0.2", 00:35:55.401 "adrfam": "ipv4", 00:35:55.401 "trsvcid": "4420", 00:35:55.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.401 "hdgst": true, 00:35:55.401 "ddgst": true 00:35:55.401 }, 00:35:55.401 "method": "bdev_nvme_attach_controller" 00:35:55.401 }' 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:55.401 00:55:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.659 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:55.659 ... 00:35:55.659 fio-3.35 00:35:55.659 Starting 3 threads 00:36:07.879 00:36:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=109937: Fri Jul 12 00:55:11 2024 00:36:07.879 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(260MiB/10050msec) 00:36:07.879 slat (nsec): min=9413, max=51648, avg=20960.91, stdev=4665.42 00:36:07.879 clat (usec): min=10488, max=56776, avg=14477.20, stdev=2284.97 00:36:07.879 lat (usec): min=10507, max=56795, avg=14498.16, stdev=2284.86 00:36:07.879 clat percentiles (usec): 00:36:07.879 | 1.00th=[11731], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:36:07.879 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:36:07.879 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16188], 00:36:07.879 | 99.00th=[16909], 99.50th=[17695], 99.90th=[55837], 99.95th=[55837], 00:36:07.879 | 99.99th=[56886] 00:36:07.879 bw ( KiB/s): min=24064, max=27648, per=39.49%, avg=26544.60, stdev=764.25, samples=20 00:36:07.879 iops : min= 188, max= 216, avg=207.35, stdev= 6.00, samples=20 00:36:07.879 lat (msec) : 20=99.76%, 100=0.24% 00:36:07.879 cpu : usr=92.44%, sys=5.94%, ctx=42, majf=0, minf=1637 00:36:07.879 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=109938: Fri Jul 12 00:55:11 2024 00:36:07.879 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(186MiB/10007msec) 00:36:07.879 slat (nsec): min=6438, max=77557, avg=18458.76, stdev=6924.72 00:36:07.879 clat (usec): min=12138, max=23447, avg=20129.31, stdev=1121.05 00:36:07.879 lat (usec): min=12157, max=23468, avg=20147.76, stdev=1121.59 00:36:07.879 clat percentiles (usec): 00:36:07.879 | 1.00th=[17171], 5.00th=[18482], 10.00th=[19006], 20.00th=[19268], 00:36:07.879 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 
00:36:07.879 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21365], 95.00th=[21627], 00:36:07.879 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23462], 99.95th=[23462], 00:36:07.879 | 99.99th=[23462] 00:36:07.879 bw ( KiB/s): min=18432, max=19968, per=28.35%, avg=19051.79, stdev=507.11, samples=19 00:36:07.879 iops : min= 144, max= 156, avg=148.84, stdev= 3.96, samples=19 00:36:07.879 lat (msec) : 20=43.52%, 50=56.48% 00:36:07.879 cpu : usr=93.12%, sys=5.51%, ctx=78, majf=0, minf=1635 00:36:07.879 IO depths : 1=14.2%, 2=85.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 issued rwts: total=1489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.879 filename0: (groupid=0, jobs=1): err= 0: pid=109939: Fri Jul 12 00:55:11 2024 00:36:07.879 read: IOPS=171, BW=21.4MiB/s (22.4MB/s)(214MiB/10008msec) 00:36:07.879 slat (nsec): min=6420, max=66310, avg=21125.78, stdev=6038.19 00:36:07.879 clat (usec): min=9490, max=21878, avg=17507.86, stdev=1545.57 00:36:07.879 lat (usec): min=9509, max=21893, avg=17528.98, stdev=1545.80 00:36:07.879 clat percentiles (usec): 00:36:07.879 | 1.00th=[13960], 5.00th=[15139], 10.00th=[15664], 20.00th=[16319], 00:36:07.879 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:36:07.879 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:36:07.879 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21365], 99.95th=[21890], 00:36:07.879 | 99.99th=[21890] 00:36:07.879 bw ( KiB/s): min=20736, max=22784, per=32.56%, avg=21883.42, stdev=664.64, samples=19 00:36:07.879 iops : min= 162, max= 178, avg=170.95, stdev= 5.22, samples=19 00:36:07.879 lat (msec) : 10=0.06%, 20=94.80%, 50=5.14% 00:36:07.879 cpu : usr=93.12%, sys=5.34%, ctx=14, majf=0, minf=1637 00:36:07.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.879 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.880 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.880 00:36:07.880 Run status group 0 (all jobs): 00:36:07.880 READ: bw=65.6MiB/s (68.8MB/s), 18.6MiB/s-25.8MiB/s (19.5MB/s-27.1MB/s), io=660MiB (692MB), run=10007-10050msec 00:36:07.880 ----------------------------------------------------- 00:36:07.880 Suppressions used: 00:36:07.880 count bytes template 00:36:07.880 5 44 /usr/src/fio/parse.c 00:36:07.880 1 8 libtcmalloc_minimal.so 00:36:07.880 1 904 libcrypto.so 00:36:07.880 ----------------------------------------------------- 00:36:07.880 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 
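[Annotation] filename0 above ran three threads (pids 109937–109939) against the digest-enabled connection — hdgst/ddgst true in the generated attach-controller params, backed by a DIF type 3 null bdev — at iodepth 3 with 128KiB random reads, and the group totalled 65.6MiB/s over ~10s. As a standalone command line under the same knobs, the run would look roughly like this (a sketch: bdev.json replaces the generated /dev/fd/62 config, and the Nvme0n1 filename assumes SPDK's usual <controller>n<nsid> bdev naming):

    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --thread=1 --runtime=10 --time_based=1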
00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 ************************************ 00:36:07.880 END TEST fio_dif_digest 00:36:07.880 ************************************ 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.880 00:36:07.880 real 0m12.382s 00:36:07.880 user 0m29.826s 00:36:07.880 sys 0m2.097s 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:07.880 00:55:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:07.880 00:55:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:07.880 00:55:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:07.880 rmmod nvme_tcp 00:36:07.880 rmmod nvme_fabrics 00:36:07.880 rmmod nvme_keyring 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 109171 ']' 00:36:07.880 00:55:12 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 109171 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 109171 ']' 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 109171 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109171 00:36:07.880 killing process with pid 109171 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109171' 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@967 -- # kill 109171 00:36:07.880 00:55:12 nvmf_dif -- common/autotest_common.sh@972 -- # wait 109171 00:36:09.250 00:55:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:09.250 00:55:13 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:09.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:09.507 Waiting for block devices as requested 00:36:09.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:09.507 0000:00:10.0 (1b36 0010): uio_pci_generic -> 
nvme 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.507 00:55:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:09.507 00:55:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.507 00:55:14 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:09.764 ************************************ 00:36:09.764 END TEST nvmf_dif 00:36:09.764 ************************************ 00:36:09.764 00:36:09.764 real 1m9.855s 00:36:09.764 user 4m10.861s 00:36:09.764 sys 0m14.742s 00:36:09.764 00:55:14 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.764 00:55:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:09.764 00:55:14 -- common/autotest_common.sh@1142 -- # return 0 00:36:09.764 00:55:14 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:09.764 00:55:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:09.764 00:55:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.764 00:55:14 -- common/autotest_common.sh@10 -- # set +x 00:36:09.764 ************************************ 00:36:09.764 START TEST nvmf_abort_qd_sizes 00:36:09.764 ************************************ 00:36:09.764 00:55:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:09.764 * Looking for test storage... 
00:36:09.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:09.765 00:55:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:09.765 Cannot find device "nvmf_tgt_br" 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:09.765 Cannot find device "nvmf_tgt_br2" 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:09.765 Cannot find device "nvmf_tgt_br" 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:09.765 Cannot find device "nvmf_tgt_br2" 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:09.765 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:10.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:10.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:10.023 00:55:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:10.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:36:10.023 00:36:10.023 --- 10.0.0.2 ping statistics --- 00:36:10.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.023 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:10.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:10.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:36:10.023 00:36:10.023 --- 10.0.0.3 ping statistics --- 00:36:10.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.023 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:10.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:10.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:36:10.023 00:36:10.023 --- 10.0.0.1 ping statistics --- 00:36:10.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.023 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:10.023 00:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:10.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.953 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:10.953 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=110549 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 110549 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 110549 ']' 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:11.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:11.210 00:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.210 [2024-07-12 00:55:16.128869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:11.210 [2024-07-12 00:55:16.129081] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.466 [2024-07-12 00:55:16.312946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:11.722 [2024-07-12 00:55:16.620554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:11.722 [2024-07-12 00:55:16.620670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.722 [2024-07-12 00:55:16.620723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.722 [2024-07-12 00:55:16.620749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.722 [2024-07-12 00:55:16.620769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:11.722 [2024-07-12 00:55:16.621019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.722 [2024-07-12 00:55:16.621762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:11.722 [2024-07-12 00:55:16.621870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:11.722 [2024-07-12 00:55:16.621877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:36:12.286 00:55:17 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:12.286 00:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:12.286 ************************************ 00:36:12.286 START TEST spdk_target_abort 00:36:12.286 ************************************ 00:36:12.286 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:12.286 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:12.286 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:36:12.286 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.287 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.544 spdk_targetn1 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.544 [2024-07-12 00:55:17.291457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.544 [2024-07-12 00:55:17.333675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.544 00:55:17 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.544 00:55:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.899 Initializing NVMe Controllers 00:36:15.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:15.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:15.899 Initialization complete. Launching workers. 
00:36:15.899 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8387, failed: 0 00:36:15.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1080, failed to submit 7307 00:36:15.899 success 732, unsuccess 348, failed 0 00:36:15.899 00:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:15.899 00:55:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.181 Initializing NVMe Controllers 00:36:19.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.181 Initialization complete. Launching workers. 00:36:19.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5867, failed: 0 00:36:19.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1316, failed to submit 4551 00:36:19.181 success 233, unsuccess 1083, failed 0 00:36:19.181 00:55:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:19.181 00:55:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.371 Initializing NVMe Controllers 00:36:23.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:23.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:23.371 Initialization complete. Launching workers. 
00:36:23.371 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26748, failed: 0 00:36:23.371 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2688, failed to submit 24060 00:36:23.371 success 261, unsuccess 2427, failed 0 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110549 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 110549 ']' 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 110549 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110549 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:23.371 killing process with pid 110549 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110549' 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 110549 00:36:23.371 00:55:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 110549 00:36:24.307 00:36:24.307 real 0m11.748s 00:36:24.307 user 0m46.229s 00:36:24.307 sys 0m1.949s 00:36:24.307 00:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:24.307 ************************************ 00:36:24.307 END TEST spdk_target_abort 00:36:24.307 00:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.307 ************************************ 00:36:24.307 00:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:24.307 00:55:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:24.307 00:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:24.307 00:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:24.307 00:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:24.307 
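[Annotation] That closes spdk_target_abort: the harness looped SPDK's abort example over queue depths 4, 24 and 64 against the TCP target it had just stood up, checking that aborts at every depth complete cleanly rather than hang or error out. The abort hit rate is allowed to fall as depth grows — 732 of 1080 aborts landed at qd 4, but only 261 of 2688 at qd 64, the misses typically arriving after the target had already completed the I/O. Reconstructed from the traced commands, the loop is simply:

    # qds=(4 24 64) in target/abort_qd_sizes.sh
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done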
************************************ 00:36:24.307 START TEST kernel_target_abort 00:36:24.307 ************************************ 00:36:24.307 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:24.307 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:24.308 00:55:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:24.308 00:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:24.308 00:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:24.308 00:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:24.308 00:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:24.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:24.566 Waiting for block devices as requested 00:36:24.566 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:24.824 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:36:25.391 No valid GPT data, bailing 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:36:25.391 No valid GPT data, bailing 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:36:25.391 No valid GPT data, bailing 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:36:25.391 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:36:25.649 No valid GPT data, bailing 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:25.649 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea --hostid=637b094c-7386-4bd8-8529-c89aa3aa2aea -a 10.0.0.1 -t tcp -s 4420 00:36:25.649 00:36:25.649 Discovery Log Number of Records 2, Generation counter 2 00:36:25.649 =====Discovery Log Entry 0====== 00:36:25.649 trtype: tcp 00:36:25.649 adrfam: ipv4 00:36:25.650 subtype: current discovery subsystem 00:36:25.650 treq: not specified, sq flow control disable supported 00:36:25.650 portid: 1 00:36:25.650 trsvcid: 4420 00:36:25.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:25.650 traddr: 10.0.0.1 00:36:25.650 eflags: none 00:36:25.650 sectype: none 00:36:25.650 =====Discovery Log Entry 1====== 00:36:25.650 trtype: tcp 00:36:25.650 adrfam: ipv4 00:36:25.650 subtype: nvme subsystem 00:36:25.650 treq: not specified, sq flow control disable supported 00:36:25.650 portid: 1 00:36:25.650 trsvcid: 4420 00:36:25.650 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:25.650 traddr: 10.0.0.1 00:36:25.650 eflags: none 00:36:25.650 sectype: none 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:25.650 00:55:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.650 00:55:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:28.936 Initializing NVMe Controllers 00:36:28.936 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.936 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.936 Initialization complete. Launching workers. 00:36:28.936 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24348, failed: 0 00:36:28.936 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24348, failed to submit 0 00:36:28.936 success 0, unsuccess 24348, failed 0 00:36:28.936 00:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:28.936 00:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.219 Initializing NVMe Controllers 00:36:32.219 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:32.219 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:32.219 Initialization complete. Launching workers. 
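
rabort(), traced above, assembles the transport ID one field at a time and then sweeps the abort example over queue depths 4, 24 and 64 (-q queue depth, -w workload, -M read percentage, -o I/O size in bytes). Condensed to a standalone sketch; the variable wiring is an assumption, while the target string and flags are verbatim from the trace:

    trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
    subnqn=nqn.2016-06.io.spdk:testnqn
    qds=(4 24 64)
    target=""
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"   # ${!r}: indirect expansion of each field
    done
    for qd in "${qds[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
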
00:36:32.219 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54696, failed: 0 00:36:32.219 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24044, failed to submit 30652 00:36:32.219 success 0, unsuccess 24044, failed 0 00:36:32.219 00:55:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:32.220 00:55:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.503 Initializing NVMe Controllers 00:36:35.503 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:35.503 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:35.503 Initialization complete. Launching workers. 00:36:35.503 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66634, failed: 0 00:36:35.503 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16638, failed to submit 49996 00:36:35.503 success 0, unsuccess 16638, failed 0 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:35.503 00:55:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:36.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:37.005 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:37.005 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:37.263 00:36:37.263 real 0m13.008s 00:36:37.263 user 0m6.967s 00:36:37.263 sys 0m3.801s 00:36:37.263 00:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:37.263 00:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.263 ************************************ 00:36:37.263 END TEST kernel_target_abort 00:36:37.263 ************************************ 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:37.263 
00:55:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:37.263 rmmod nvme_tcp 00:36:37.263 rmmod nvme_fabrics 00:36:37.263 rmmod nvme_keyring 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:37.263 Process with pid 110549 is not found 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 110549 ']' 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 110549 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 110549 ']' 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 110549 00:36:37.263 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (110549) - No such process 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 110549 is not found' 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:37.263 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:37.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:37.829 Waiting for block devices as requested 00:36:37.829 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:37.829 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:37.829 ************************************ 00:36:37.829 END TEST nvmf_abort_qd_sizes 00:36:37.829 ************************************ 00:36:37.829 00:36:37.829 real 0m28.252s 00:36:37.829 user 0m54.523s 00:36:37.829 sys 0m7.174s 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:37.829 00:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:38.088 00:55:42 -- common/autotest_common.sh@1142 -- # return 0 00:36:38.088 00:55:42 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:38.088 00:55:42 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:36:38.088 00:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:38.088 00:55:42 -- common/autotest_common.sh@10 -- # set +x 00:36:38.088 ************************************ 00:36:38.088 START TEST keyring_file 00:36:38.088 ************************************ 00:36:38.088 00:55:42 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:38.088 * Looking for test storage... 00:36:38.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:38.088 00:55:42 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:38.088 00:55:42 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.088 00:55:42 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:38.088 00:55:42 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.088 00:55:42 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.088 00:55:42 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.088 00:55:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.088 00:55:42 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.088 00:55:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.089 00:55:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:38.089 00:55:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IkTl48dRp0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IkTl48dRp0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IkTl48dRp0 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.IkTl48dRp0 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jb8pvw9SMb 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:38.089 00:55:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jb8pvw9SMb 00:36:38.089 00:55:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jb8pvw9SMb 00:36:38.089 00:55:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jb8pvw9SMb 00:36:38.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.089 00:55:43 keyring_file -- keyring/file.sh@30 -- # tgtpid=111635 00:36:38.089 00:55:43 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111635 00:36:38.089 00:55:43 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 111635 ']' 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:38.089 00:55:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:38.346 [2024-07-12 00:55:43.151467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
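
Each prep_key call above is: mktemp, render the hex key in the TLS PSK interchange format, chmod 0600. A sketch of the encoding step, assuming the NVMe TP 8006 interchange layout (configured PSK bytes followed by their CRC-32 in little-endian order, base64-encoded, with a two-hex-digit hash identifier; digest 0 means no hash), shelling out to python the way the traced script does:

    key=00112233445566778899aabbccddeeff
    digest=0
    b64=$(python3 -c 'import base64, sys, zlib
    psk = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(psk).to_bytes(4, "little")
    print(base64.b64encode(psk + crc).decode())' "$key")
    printf 'NVMeTLSkey-1:%02x:%s:\n' "$digest" "$b64"

The resulting NVMeTLSkey-1:00:...: string is what lands in /tmp/tmp.IkTl48dRp0 and /tmp/tmp.jb8pvw9SMb and is later registered over RPC with keyring_file_add_key.
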
00:36:38.347 [2024-07-12 00:55:43.152817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111635 ] 00:36:38.604 [2024-07-12 00:55:43.333827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.862 [2024-07-12 00:55:43.696607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:39.795 00:55:44 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.795 [2024-07-12 00:55:44.514865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.795 null0 00:36:39.795 [2024-07-12 00:55:44.546810] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:39.795 [2024-07-12 00:55:44.547155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:39.795 [2024-07-12 00:55:44.554834] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.795 00:55:44 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.795 [2024-07-12 00:55:44.566852] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:39.795 2024/07/12 00:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:36:39.795 request: 00:36:39.795 { 00:36:39.795 "method": "nvmf_subsystem_add_listener", 00:36:39.795 "params": { 00:36:39.795 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.795 "secure_channel": false, 00:36:39.795 "listen_address": { 00:36:39.795 "trtype": "tcp", 00:36:39.795 "traddr": "127.0.0.1", 00:36:39.795 "trsvcid": "4420" 00:36:39.795 } 00:36:39.795 } 00:36:39.795 } 00:36:39.795 Got JSON-RPC error 
response 00:36:39.795 GoRPCClient: error on JSON-RPC call 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:39.795 00:55:44 keyring_file -- keyring/file.sh@46 -- # bperfpid=111670 00:36:39.795 00:55:44 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:39.795 00:55:44 keyring_file -- keyring/file.sh@48 -- # waitforlisten 111670 /var/tmp/bperf.sock 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 111670 ']' 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:39.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:39.795 00:55:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:39.795 [2024-07-12 00:55:44.699758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:39.795 [2024-07-12 00:55:44.699922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111670 ] 00:36:40.053 [2024-07-12 00:55:44.866702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.310 [2024-07-12 00:55:45.152598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.879 00:55:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:40.879 00:55:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:40.879 00:55:45 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:40.879 00:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:41.155 00:55:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jb8pvw9SMb 00:36:41.155 00:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jb8pvw9SMb 00:36:41.442 00:55:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:41.442 00:55:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:41.442 00:55:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.442 00:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.442 00:55:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.698 00:55:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.IkTl48dRp0 == 
\/\t\m\p\/\t\m\p\.\I\k\T\l\4\8\d\R\p\0 ]] 00:36:41.698 00:55:46 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:41.698 00:55:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:41.698 00:55:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.699 00:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.699 00:55:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.956 00:55:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jb8pvw9SMb == \/\t\m\p\/\t\m\p\.\j\b\8\p\v\w\9\S\M\b ]] 00:36:41.956 00:55:46 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:41.956 00:55:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.956 00:55:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.956 00:55:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.956 00:55:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.956 00:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.213 00:55:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:42.213 00:55:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:42.213 00:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:42.213 00:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.213 00:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.213 00:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.213 00:55:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.470 00:55:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:42.470 00:55:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.470 00:55:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.727 [2024-07-12 00:55:47.603324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:42.983 nvme0n1 00:36:42.983 00:55:47 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:42.983 00:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.983 00:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.983 00:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.983 00:55:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.983 00:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.240 00:55:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:43.240 00:55:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:43.240 00:55:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.240 00:55:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.240 00:55:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:43.240 00:55:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.240 00:55:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.497 00:55:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:43.497 00:55:48 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:43.497 Running I/O for 1 seconds... 00:36:44.870 00:36:44.870 Latency(us) 00:36:44.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.870 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:44.870 nvme0n1 : 1.02 6761.32 26.41 0.00 0.00 18747.75 8996.31 25976.09 00:36:44.870 =================================================================================================================== 00:36:44.870 Total : 6761.32 26.41 0.00 0.00 18747.75 8996.31 25976.09 00:36:44.870 0 00:36:44.870 00:55:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:44.870 00:55:49 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.870 00:55:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.128 00:55:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:45.128 00:55:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:45.128 00:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:45.128 00:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.128 00:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.128 00:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.128 00:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.695 00:55:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:45.695 00:55:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
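
The recurring keyring_get_keys/jq pairs above are keyring/common.sh's get_key/get_refcnt helpers. Their approximate shape (the standalone definitions are an assumption; the RPC call and jq filters are verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_key() {
        # List all registered keys on the bperf socket, keep one by name.
        "$rpc" -s /var/tmp/bperf.sock keyring_get_keys |
            jq ".[] | select(.name == \"$1\")"
    }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }
    (( $(get_refcnt key0) == 1 ))   # e.g. key0 held only by the keyring itself

The refcount moving 1 -> 2 -> 1 around attach/detach is the bdev_nvme controller taking and dropping its reference on key0.
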
00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.695 [2024-07-12 00:55:50.558420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:36:45.695 [2024-07-12 00:55:50.558409] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected [2024-07-12 00:55:50.559356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:36:45.695 [2024-07-12 00:55:50.560351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:45.695 [2024-07-12 00:55:50.560384] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:45.695 [2024-07-12 00:55:50.560410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 2024/07/12 00:55:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:45.695 request: 00:36:45.695 { 00:36:45.695 "method": "bdev_nvme_attach_controller", 00:36:45.695 "params": { 00:36:45.695 "name": "nvme0", 00:36:45.695 "trtype": "tcp", 00:36:45.695 "traddr": "127.0.0.1", 00:36:45.695 "adrfam": "ipv4", 00:36:45.695 "trsvcid": "4420", 00:36:45.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.695 "prchk_reftag": false, 00:36:45.695 "prchk_guard": false, 00:36:45.695 "hdgst": false, 00:36:45.695 "ddgst": false, 00:36:45.695 "psk": "key1" 00:36:45.695 } 00:36:45.695 } 00:36:45.695 Got JSON-RPC error response 00:36:45.695 GoRPCClient: error on JSON-RPC call 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:45.695 00:55:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:45.695 00:55:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.695 00:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.954 00:55:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
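
The attach that just failed is deliberate: the target side holds key0, so a session offering key1 never completes the TLS handshake; the socket is torn down (errno 107) and rpc.py surfaces Code=-5 Input/output error. The NOT wrapper turns that expected failure into a pass. Reduced to its essence (the real helper in autotest_common.sh does more exit-code bookkeeping):

    NOT() { ! "$@"; }   # succeed only if the wrapped command fails
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key1

The refcount checks on either side of this point confirm the failed attach leaked no key references.
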
00:36:45.954 00:55:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:45.954 00:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:45.954 00:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.954 00:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.954 00:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.954 00:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.522 00:55:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:46.522 00:55:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:46.522 00:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:46.522 00:55:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:46.522 00:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:46.780 00:55:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:46.780 00:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.780 00:55:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:47.038 00:55:51 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:47.038 00:55:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IkTl48dRp0 00:36:47.038 00:55:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.038 00:55:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.038 00:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.297 [2024-07-12 00:55:52.174408] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IkTl48dRp0': 0100660 00:36:47.297 [2024-07-12 00:55:52.174474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:47.297 2024/07/12 00:55:52 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.IkTl48dRp0], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:36:47.297 request: 00:36:47.297 { 00:36:47.297 "method": "keyring_file_add_key", 00:36:47.297 "params": { 00:36:47.297 "name": "key0", 00:36:47.297 "path": "/tmp/tmp.IkTl48dRp0" 00:36:47.297 } 00:36:47.297 } 00:36:47.297 Got JSON-RPC error response 00:36:47.297 GoRPCClient: error on JSON-RPC call 00:36:47.297 00:55:52 keyring_file -- 
common/autotest_common.sh@651 -- # es=1 00:36:47.297 00:55:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:47.297 00:55:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:47.297 00:55:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:47.297 00:55:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.IkTl48dRp0 00:36:47.297 00:55:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.297 00:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IkTl48dRp0 00:36:47.865 00:55:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.IkTl48dRp0 00:36:47.865 00:55:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:47.865 00:55:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.865 00:55:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.865 00:55:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.865 00:55:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.865 00:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.124 00:55:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:48.124 00:55:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:48.124 00:55:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.124 00:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.124 [2024-07-12 00:55:53.050682] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.IkTl48dRp0': No such file or directory 00:36:48.124 [2024-07-12 00:55:53.050744] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:48.124 [2024-07-12 00:55:53.050780] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:48.124 [2024-07-12 00:55:53.050794] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:48.124 [2024-07-12 00:55:53.050810] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 
127.0.0.1) 00:36:48.124 2024/07/12 00:55:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:36:48.124 request: 00:36:48.124 { 00:36:48.124 "method": "bdev_nvme_attach_controller", 00:36:48.124 "params": { 00:36:48.124 "name": "nvme0", 00:36:48.124 "trtype": "tcp", 00:36:48.124 "traddr": "127.0.0.1", 00:36:48.124 "adrfam": "ipv4", 00:36:48.124 "trsvcid": "4420", 00:36:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.124 "prchk_reftag": false, 00:36:48.124 "prchk_guard": false, 00:36:48.124 "hdgst": false, 00:36:48.124 "ddgst": false, 00:36:48.124 "psk": "key0" 00:36:48.124 } 00:36:48.125 } 00:36:48.125 Got JSON-RPC error response 00:36:48.125 GoRPCClient: error on JSON-RPC call 00:36:48.383 00:55:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:48.383 00:55:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:48.383 00:55:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:48.383 00:55:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:48.383 00:55:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:48.383 00:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:48.641 00:55:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kO35HjwNYO 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:48.641 00:55:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kO35HjwNYO 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kO35HjwNYO 00:36:48.641 00:55:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.kO35HjwNYO 00:36:48.641 00:55:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kO35HjwNYO 00:36:48.641 00:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kO35HjwNYO 00:36:48.899 
00:55:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.899 00:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.157 nvme0n1 00:36:49.157 00:55:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:49.157 00:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.157 00:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.157 00:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.157 00:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.157 00:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.416 00:55:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:49.416 00:55:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:49.416 00:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.680 00:55:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:49.680 00:55:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:49.680 00:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.680 00:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.680 00:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.939 00:55:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:49.939 00:55:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:49.939 00:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.939 00:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.939 00:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.939 00:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.939 00:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.197 00:55:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:50.197 00:55:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:50.197 00:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:50.456 00:55:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:50.456 00:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.456 00:55:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:51.023 00:55:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:51.023 00:55:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kO35HjwNYO 00:36:51.023 00:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key 
key0 /tmp/tmp.kO35HjwNYO 00:36:51.023 00:55:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jb8pvw9SMb 00:36:51.023 00:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jb8pvw9SMb 00:36:51.282 00:55:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.282 00:55:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.849 nvme0n1 00:36:51.849 00:55:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:51.849 00:55:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:52.108 00:55:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:52.108 "subsystems": [ 00:36:52.108 { 00:36:52.108 "subsystem": "keyring", 00:36:52.108 "config": [ 00:36:52.108 { 00:36:52.108 "method": "keyring_file_add_key", 00:36:52.108 "params": { 00:36:52.108 "name": "key0", 00:36:52.108 "path": "/tmp/tmp.kO35HjwNYO" 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "keyring_file_add_key", 00:36:52.108 "params": { 00:36:52.108 "name": "key1", 00:36:52.108 "path": "/tmp/tmp.jb8pvw9SMb" 00:36:52.108 } 00:36:52.108 } 00:36:52.108 ] 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "subsystem": "iobuf", 00:36:52.108 "config": [ 00:36:52.108 { 00:36:52.108 "method": "iobuf_set_options", 00:36:52.108 "params": { 00:36:52.108 "large_bufsize": 135168, 00:36:52.108 "large_pool_count": 1024, 00:36:52.108 "small_bufsize": 8192, 00:36:52.108 "small_pool_count": 8192 00:36:52.108 } 00:36:52.108 } 00:36:52.108 ] 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "subsystem": "sock", 00:36:52.108 "config": [ 00:36:52.108 { 00:36:52.108 "method": "sock_set_default_impl", 00:36:52.108 "params": { 00:36:52.108 "impl_name": "posix" 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "sock_impl_set_options", 00:36:52.108 "params": { 00:36:52.108 "enable_ktls": false, 00:36:52.108 "enable_placement_id": 0, 00:36:52.108 "enable_quickack": false, 00:36:52.108 "enable_recv_pipe": true, 00:36:52.108 "enable_zerocopy_send_client": false, 00:36:52.108 "enable_zerocopy_send_server": true, 00:36:52.108 "impl_name": "ssl", 00:36:52.108 "recv_buf_size": 4096, 00:36:52.108 "send_buf_size": 4096, 00:36:52.108 "tls_version": 0, 00:36:52.108 "zerocopy_threshold": 0 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "sock_impl_set_options", 00:36:52.108 "params": { 00:36:52.108 "enable_ktls": false, 00:36:52.108 "enable_placement_id": 0, 00:36:52.108 "enable_quickack": false, 00:36:52.108 "enable_recv_pipe": true, 00:36:52.108 "enable_zerocopy_send_client": false, 00:36:52.108 "enable_zerocopy_send_server": true, 00:36:52.108 "impl_name": "posix", 00:36:52.108 "recv_buf_size": 2097152, 00:36:52.108 "send_buf_size": 2097152, 00:36:52.108 "tls_version": 0, 00:36:52.108 "zerocopy_threshold": 0 00:36:52.108 } 00:36:52.108 } 00:36:52.108 ] 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "subsystem": "vmd", 00:36:52.108 "config": [] 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "subsystem": "accel", 00:36:52.108 "config": [ 00:36:52.108 { 
00:36:52.108 "method": "accel_set_options", 00:36:52.108 "params": { 00:36:52.108 "buf_count": 2048, 00:36:52.108 "large_cache_size": 16, 00:36:52.108 "sequence_count": 2048, 00:36:52.108 "small_cache_size": 128, 00:36:52.108 "task_count": 2048 00:36:52.108 } 00:36:52.108 } 00:36:52.108 ] 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "subsystem": "bdev", 00:36:52.108 "config": [ 00:36:52.108 { 00:36:52.108 "method": "bdev_set_options", 00:36:52.108 "params": { 00:36:52.108 "bdev_auto_examine": true, 00:36:52.108 "bdev_io_cache_size": 256, 00:36:52.108 "bdev_io_pool_size": 65535, 00:36:52.108 "iobuf_large_cache_size": 16, 00:36:52.108 "iobuf_small_cache_size": 128 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "bdev_raid_set_options", 00:36:52.108 "params": { 00:36:52.108 "process_window_size_kb": 1024 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "bdev_iscsi_set_options", 00:36:52.108 "params": { 00:36:52.108 "timeout_sec": 30 00:36:52.108 } 00:36:52.108 }, 00:36:52.108 { 00:36:52.108 "method": "bdev_nvme_set_options", 00:36:52.108 "params": { 00:36:52.108 "action_on_timeout": "none", 00:36:52.108 "allow_accel_sequence": false, 00:36:52.108 "arbitration_burst": 0, 00:36:52.108 "bdev_retry_count": 3, 00:36:52.108 "ctrlr_loss_timeout_sec": 0, 00:36:52.108 "delay_cmd_submit": true, 00:36:52.108 "dhchap_dhgroups": [ 00:36:52.108 "null", 00:36:52.108 "ffdhe2048", 00:36:52.108 "ffdhe3072", 00:36:52.108 "ffdhe4096", 00:36:52.108 "ffdhe6144", 00:36:52.108 "ffdhe8192" 00:36:52.108 ], 00:36:52.108 "dhchap_digests": [ 00:36:52.108 "sha256", 00:36:52.108 "sha384", 00:36:52.108 "sha512" 00:36:52.109 ], 00:36:52.109 "disable_auto_failback": false, 00:36:52.109 "fast_io_fail_timeout_sec": 0, 00:36:52.109 "generate_uuids": false, 00:36:52.109 "high_priority_weight": 0, 00:36:52.109 "io_path_stat": false, 00:36:52.109 "io_queue_requests": 512, 00:36:52.109 "keep_alive_timeout_ms": 10000, 00:36:52.109 "low_priority_weight": 0, 00:36:52.109 "medium_priority_weight": 0, 00:36:52.109 "nvme_adminq_poll_period_us": 10000, 00:36:52.109 "nvme_error_stat": false, 00:36:52.109 "nvme_ioq_poll_period_us": 0, 00:36:52.109 "rdma_cm_event_timeout_ms": 0, 00:36:52.109 "rdma_max_cq_size": 0, 00:36:52.109 "rdma_srq_size": 0, 00:36:52.109 "reconnect_delay_sec": 0, 00:36:52.109 "timeout_admin_us": 0, 00:36:52.109 "timeout_us": 0, 00:36:52.109 "transport_ack_timeout": 0, 00:36:52.109 "transport_retry_count": 4, 00:36:52.109 "transport_tos": 0 00:36:52.109 } 00:36:52.109 }, 00:36:52.109 { 00:36:52.109 "method": "bdev_nvme_attach_controller", 00:36:52.109 "params": { 00:36:52.109 "adrfam": "IPv4", 00:36:52.109 "ctrlr_loss_timeout_sec": 0, 00:36:52.109 "ddgst": false, 00:36:52.109 "fast_io_fail_timeout_sec": 0, 00:36:52.109 "hdgst": false, 00:36:52.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.109 "name": "nvme0", 00:36:52.109 "prchk_guard": false, 00:36:52.109 "prchk_reftag": false, 00:36:52.109 "psk": "key0", 00:36:52.109 "reconnect_delay_sec": 0, 00:36:52.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.109 "traddr": "127.0.0.1", 00:36:52.109 "trsvcid": "4420", 00:36:52.109 "trtype": "TCP" 00:36:52.109 } 00:36:52.109 }, 00:36:52.109 { 00:36:52.109 "method": "bdev_nvme_set_hotplug", 00:36:52.109 "params": { 00:36:52.109 "enable": false, 00:36:52.109 "period_us": 100000 00:36:52.109 } 00:36:52.109 }, 00:36:52.109 { 00:36:52.109 "method": "bdev_wait_for_examine" 00:36:52.109 } 00:36:52.109 ] 00:36:52.109 }, 00:36:52.109 { 00:36:52.109 "subsystem": "nbd", 00:36:52.109 "config": 
[] 00:36:52.109 } 00:36:52.109 ] 00:36:52.109 }' 00:36:52.109 00:55:56 keyring_file -- keyring/file.sh@114 -- # killprocess 111670 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 111670 ']' 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@952 -- # kill -0 111670 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111670 00:36:52.109 killing process with pid 111670 00:36:52.109 Received shutdown signal, test time was about 1.000000 seconds 00:36:52.109 00:36:52.109 Latency(us) 00:36:52.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.109 =================================================================================================================== 00:36:52.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111670' 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@967 -- # kill 111670 00:36:52.109 00:55:56 keyring_file -- common/autotest_common.sh@972 -- # wait 111670 00:36:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.044 00:55:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=112153 00:36:53.044 00:55:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 112153 /var/tmp/bperf.sock 00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 112153 ']' 00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
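At this point the first bdevperf (pid 111670) has been torn down, and the test restarts with state replay: the JSON captured by save_config on the old socket is fed to a fresh bdevperf through process substitution, so the new process boots with the same keyring and bdev configuration. A sketch of the shape of that wiring, with paths as they appear in this run (the exact file.sh plumbing is assumed):

    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)    # dump live state
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")             # replay via fd

The <(...) substitution surfaces as /dev/fd/63, which is exactly the -c argument visible on the bdevperf command line below.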
00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:53.044 00:55:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:53.045 00:55:57 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:53.045 00:55:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:53.045 "subsystems": [ 00:36:53.045 { 00:36:53.045 "subsystem": "keyring", 00:36:53.045 "config": [ 00:36:53.045 { 00:36:53.045 "method": "keyring_file_add_key", 00:36:53.045 "params": { 00:36:53.045 "name": "key0", 00:36:53.045 "path": "/tmp/tmp.kO35HjwNYO" 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "keyring_file_add_key", 00:36:53.045 "params": { 00:36:53.045 "name": "key1", 00:36:53.045 "path": "/tmp/tmp.jb8pvw9SMb" 00:36:53.045 } 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "iobuf", 00:36:53.045 "config": [ 00:36:53.045 { 00:36:53.045 "method": "iobuf_set_options", 00:36:53.045 "params": { 00:36:53.045 "large_bufsize": 135168, 00:36:53.045 "large_pool_count": 1024, 00:36:53.045 "small_bufsize": 8192, 00:36:53.045 "small_pool_count": 8192 00:36:53.045 } 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "sock", 00:36:53.045 "config": [ 00:36:53.045 { 00:36:53.045 "method": "sock_set_default_impl", 00:36:53.045 "params": { 00:36:53.045 "impl_name": "posix" 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "sock_impl_set_options", 00:36:53.045 "params": { 00:36:53.045 "enable_ktls": false, 00:36:53.045 "enable_placement_id": 0, 00:36:53.045 "enable_quickack": false, 00:36:53.045 "enable_recv_pipe": true, 00:36:53.045 "enable_zerocopy_send_client": false, 00:36:53.045 "enable_zerocopy_send_server": true, 00:36:53.045 "impl_name": "ssl", 00:36:53.045 "recv_buf_size": 4096, 00:36:53.045 "send_buf_size": 4096, 00:36:53.045 "tls_version": 0, 00:36:53.045 "zerocopy_threshold": 0 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "sock_impl_set_options", 00:36:53.045 "params": { 00:36:53.045 "enable_ktls": false, 00:36:53.045 "enable_placement_id": 0, 00:36:53.045 "enable_quickack": false, 00:36:53.045 "enable_recv_pipe": true, 00:36:53.045 "enable_zerocopy_send_client": false, 00:36:53.045 "enable_zerocopy_send_server": true, 00:36:53.045 "impl_name": "posix", 00:36:53.045 "recv_buf_size": 2097152, 00:36:53.045 "send_buf_size": 2097152, 00:36:53.045 "tls_version": 0, 00:36:53.045 "zerocopy_threshold": 0 00:36:53.045 } 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "vmd", 00:36:53.045 "config": [] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "accel", 00:36:53.045 "config": [ 00:36:53.045 { 00:36:53.045 "method": "accel_set_options", 00:36:53.045 "params": { 00:36:53.045 "buf_count": 2048, 00:36:53.045 "large_cache_size": 16, 00:36:53.045 "sequence_count": 2048, 00:36:53.045 "small_cache_size": 128, 00:36:53.045 "task_count": 2048 00:36:53.045 } 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "bdev", 00:36:53.045 "config": [ 00:36:53.045 { 00:36:53.045 "method": "bdev_set_options", 00:36:53.045 "params": { 00:36:53.045 "bdev_auto_examine": true, 00:36:53.045 "bdev_io_cache_size": 256, 00:36:53.045 "bdev_io_pool_size": 65535, 00:36:53.045 "iobuf_large_cache_size": 16, 00:36:53.045 "iobuf_small_cache_size": 128 00:36:53.045 } 00:36:53.045 
}, 00:36:53.045 { 00:36:53.045 "method": "bdev_raid_set_options", 00:36:53.045 "params": { 00:36:53.045 "process_window_size_kb": 1024 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "bdev_iscsi_set_options", 00:36:53.045 "params": { 00:36:53.045 "timeout_sec": 30 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "bdev_nvme_set_options", 00:36:53.045 "params": { 00:36:53.045 "action_on_timeout": "none", 00:36:53.045 "allow_accel_sequence": false, 00:36:53.045 "arbitration_burst": 0, 00:36:53.045 "bdev_retry_count": 3, 00:36:53.045 "ctrlr_loss_timeout_sec": 0, 00:36:53.045 "delay_cmd_submit": true, 00:36:53.045 "dhchap_dhgroups": [ 00:36:53.045 "null", 00:36:53.045 "ffdhe2048", 00:36:53.045 "ffdhe3072", 00:36:53.045 "ffdhe4096", 00:36:53.045 "ffdhe6144", 00:36:53.045 "ffdhe8192" 00:36:53.045 ], 00:36:53.045 "dhchap_digests": [ 00:36:53.045 "sha256", 00:36:53.045 "sha384", 00:36:53.045 "sha512" 00:36:53.045 ], 00:36:53.045 "disable_auto_failback": false, 00:36:53.045 "fast_io_fail_timeout_sec": 0, 00:36:53.045 "generate_uuids": false, 00:36:53.045 "high_priority_weight": 0, 00:36:53.045 "io_path_stat": false, 00:36:53.045 "io_queue_requests": 512, 00:36:53.045 "keep_alive_timeout_ms": 10000, 00:36:53.045 "low_priority_weight": 0, 00:36:53.045 "medium_priority_weight": 0, 00:36:53.045 "nvme_adminq_poll_period_us": 10000, 00:36:53.045 "nvme_error_stat": false, 00:36:53.045 "nvme_ioq_poll_period_us": 0, 00:36:53.045 "rdma_cm_event_timeout_ms": 0, 00:36:53.045 "rdma_max_cq_size": 0, 00:36:53.045 "rdma_srq_size": 0, 00:36:53.045 "reconnect_delay_sec": 0, 00:36:53.045 "timeout_admin_us": 0, 00:36:53.045 "timeout_us": 0, 00:36:53.045 "transport_ack_timeout": 0, 00:36:53.045 "transport_retry_count": 4, 00:36:53.045 "transport_tos": 0 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "bdev_nvme_attach_controller", 00:36:53.045 "params": { 00:36:53.045 "adrfam": "IPv4", 00:36:53.045 "ctrlr_loss_timeout_sec": 0, 00:36:53.045 "ddgst": false, 00:36:53.045 "fast_io_fail_timeout_sec": 0, 00:36:53.045 "hdgst": false, 00:36:53.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.045 "name": "nvme0", 00:36:53.045 "prchk_guard": false, 00:36:53.045 "prchk_reftag": false, 00:36:53.045 "psk": "key0", 00:36:53.045 "reconnect_delay_sec": 0, 00:36:53.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.045 "traddr": "127.0.0.1", 00:36:53.045 "trsvcid": "4420", 00:36:53.045 "trtype": "TCP" 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "bdev_nvme_set_hotplug", 00:36:53.045 "params": { 00:36:53.045 "enable": false, 00:36:53.045 "period_us": 100000 00:36:53.045 } 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "method": "bdev_wait_for_examine" 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }, 00:36:53.045 { 00:36:53.045 "subsystem": "nbd", 00:36:53.045 "config": [] 00:36:53.045 } 00:36:53.045 ] 00:36:53.045 }' 00:36:53.304 [2024-07-12 00:55:58.076049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
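Worth noting on that command line: -z starts bdevperf idle, waiting for an RPC trigger instead of running I/O immediately, which is what lets the keyring checks below poke /var/tmp/bperf.sock while no workload is in flight. In this keyring_file pass the process appears to be used only for those RPC checks and is killed without a run (hence the all-zero latency table at its shutdown); the keyring_linux test later fires an actual workload with:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests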
00:36:53.304 [2024-07-12 00:55:58.076234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112153 ] 00:36:53.564 [2024-07-12 00:55:58.244739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.564 [2024-07-12 00:55:58.494669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.132 [2024-07-12 00:55:58.914226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:54.133 00:55:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:54.133 00:55:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:54.133 00:55:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:54.133 00:55:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.133 00:55:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:54.391 00:55:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:54.391 00:55:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:54.391 00:55:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:54.391 00:55:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.391 00:55:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.391 00:55:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.391 00:55:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.959 00:55:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:54.959 00:55:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:54.959 00:55:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:54.959 00:55:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.959 00:55:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:54.959 00:55:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.959 00:55:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.959 00:55:59 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:55.218 00:55:59 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:55.218 00:55:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:55.218 00:55:59 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:55.476 00:56:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:55.476 00:56:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:55.476 00:56:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kO35HjwNYO /tmp/tmp.jb8pvw9SMb 00:36:55.476 00:56:00 keyring_file -- keyring/file.sh@20 -- # killprocess 112153 00:36:55.476 00:56:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 112153 ']' 00:36:55.476 00:56:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 112153 00:36:55.476 00:56:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:55.476 00:56:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
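The get_refcnt checks above reassemble into one pipeline: list every registered key over the bperf socket, select one by name, read its refcnt field. The pattern as used here:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'

key0 reports refcnt 2 (the file registration plus the attached nvme0 controller holding it as its PSK), while key1, registered but not referenced by any controller, stays at 1.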
00:36:55.476 00:56:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112153 00:36:55.476 killing process with pid 112153 00:36:55.476 Received shutdown signal, test time was about 1.000000 seconds 00:36:55.477 00:36:55.477 Latency(us) 00:36:55.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.477 =================================================================================================================== 00:36:55.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:55.477 00:56:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:55.477 00:56:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:55.477 00:56:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112153' 00:36:55.477 00:56:00 keyring_file -- common/autotest_common.sh@967 -- # kill 112153 00:36:55.477 00:56:00 keyring_file -- common/autotest_common.sh@972 -- # wait 112153 00:36:56.853 00:56:01 keyring_file -- keyring/file.sh@21 -- # killprocess 111635 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 111635 ']' 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 111635 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111635 00:36:56.853 killing process with pid 111635 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111635' 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@967 -- # kill 111635 00:36:56.853 [2024-07-12 00:56:01.510301] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:56.853 00:56:01 keyring_file -- common/autotest_common.sh@972 -- # wait 111635 00:36:59.396 00:36:59.396 real 0m21.240s 00:36:59.396 user 0m47.743s 00:36:59.396 sys 0m3.829s 00:36:59.396 00:56:04 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:59.396 00:56:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.396 ************************************ 00:36:59.396 END TEST keyring_file 00:36:59.396 ************************************ 00:36:59.396 00:56:04 -- common/autotest_common.sh@1142 -- # return 0 00:36:59.396 00:56:04 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:59.396 00:56:04 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:59.396 00:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:59.396 00:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:59.396 00:56:04 -- common/autotest_common.sh@10 -- # set +x 00:36:59.396 ************************************ 00:36:59.396 START TEST keyring_linux 00:36:59.396 ************************************ 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:59.396 * Looking for test storage... 
00:36:59.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=637b094c-7386-4bd8-8529-c89aa3aa2aea 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:59.396 00:56:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.396 00:56:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.396 00:56:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.396 00:56:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.396 00:56:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.396 00:56:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.396 00:56:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:59.396 00:56:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:59.396 00:56:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:59.396 /tmp/:spdk-test:key0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:59.396 00:56:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:59.396 /tmp/:spdk-test:key1 00:36:59.396 00:56:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112342 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:59.396 00:56:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112342 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 112342 ']' 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:59.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.396 00:56:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:59.397 00:56:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:59.654 [2024-07-12 00:56:04.429680] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:59.654 [2024-07-12 00:56:04.429846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112342 ] 00:36:59.913 [2024-07-12 00:56:04.598012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.171 [2024-07-12 00:56:04.892881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.129 [2024-07-12 00:56:05.737104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.129 null0 00:37:01.129 [2024-07-12 00:56:05.769047] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:01.129 [2024-07-12 00:56:05.769385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:01.129 694813520 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:01.129 906429400 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112378 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112378 /var/tmp/bperf.sock 00:37:01.129 00:56:05 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 112378 ']' 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.129 00:56:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.129 [2024-07-12 00:56:05.924759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
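The two keyctl lines above load the interchange-format PSKs into the kernel session keyring (@s); keyctl add prints the new key's serial on success (694813520 and 906429400 here), and the test later resolves the names back to those serials. A sketch of the round trip, reusing the exact key name and payload from this run:

    sn=$(keyctl add user :spdk-test:key0 \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl search @s user :spdk-test:key0    # name -> serial, should match $sn
    keyctl print "$sn"                       # dump the payload for comparison

Once keyring_linux_set_options --enable is issued over the bperf socket, the key can be handed to bdev_nvme_attach_controller as --psk :spdk-test:key0 instead of a file path.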
00:37:01.129 [2024-07-12 00:56:05.924946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112378 ] 00:37:01.387 [2024-07-12 00:56:06.105542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.645 [2024-07-12 00:56:06.376168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.903 00:56:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:01.903 00:56:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:01.903 00:56:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:01.903 00:56:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:02.163 00:56:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:02.420 00:56:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:02.987 00:56:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:02.987 00:56:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:02.987 [2024-07-12 00:56:07.907778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:03.288 nvme0n1 00:37:03.288 00:56:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:03.288 00:56:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:03.288 00:56:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:03.288 00:56:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:03.288 00:56:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:03.288 00:56:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.583 00:56:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:03.583 00:56:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:03.583 00:56:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:03.583 00:56:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:03.583 00:56:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.583 00:56:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.583 00:56:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@25 -- # sn=694813520 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 694813520 == \6\9\4\8\1\3\5\2\0 ]] 00:37:03.842 00:56:08 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 694813520 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:03.842 00:56:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.842 Running I/O for 1 seconds... 00:37:05.217 00:37:05.217 Latency(us) 00:37:05.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.217 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:05.217 nvme0n1 : 1.02 6447.29 25.18 0.00 0.00 19647.33 5779.08 22639.71 00:37:05.217 =================================================================================================================== 00:37:05.217 Total : 6447.29 25.18 0.00 0.00 19647.33 5779.08 22639.71 00:37:05.217 0 00:37:05.217 00:56:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:05.217 00:56:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:05.217 00:56:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:05.217 00:56:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:05.217 00:56:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:05.217 00:56:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:05.217 00:56:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:05.217 00:56:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.476 00:56:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:05.476 00:56:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:05.476 00:56:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:05.476 00:56:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.476 00:56:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.476 00:56:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:06.042 [2024-07-12 00:56:10.679272] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:06.042 [2024-07-12 00:56:10.679315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:37:06.042 [2024-07-12 00:56:10.680262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:37:06.042 [2024-07-12 00:56:10.681257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:06.042 [2024-07-12 00:56:10.681311] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:06.042 [2024-07-12 00:56:10.681328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:06.042 2024/07/12 00:56:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:06.043 request: 00:37:06.043 { 00:37:06.043 "method": "bdev_nvme_attach_controller", 00:37:06.043 "params": { 00:37:06.043 "name": "nvme0", 00:37:06.043 "trtype": "tcp", 00:37:06.043 "traddr": "127.0.0.1", 00:37:06.043 "adrfam": "ipv4", 00:37:06.043 "trsvcid": "4420", 00:37:06.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.043 "prchk_reftag": false, 00:37:06.043 "prchk_guard": false, 00:37:06.043 "hdgst": false, 00:37:06.043 "ddgst": false, 00:37:06.043 "psk": ":spdk-test:key1" 00:37:06.043 } 00:37:06.043 } 00:37:06.043 Got JSON-RPC error response 00:37:06.043 GoRPCClient: error on JSON-RPC call 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@33 -- # sn=694813520 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 694813520 00:37:06.043 1 links removed 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@16 -- # 
keyctl search @s user :spdk-test:key1 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@33 -- # sn=906429400 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 906429400 00:37:06.043 1 links removed 00:37:06.043 00:56:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112378 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 112378 ']' 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 112378 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112378 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:06.043 killing process with pid 112378 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112378' 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 112378 00:37:06.043 Received shutdown signal, test time was about 1.000000 seconds 00:37:06.043 00:37:06.043 Latency(us) 00:37:06.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.043 =================================================================================================================== 00:37:06.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:06.043 00:56:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 112378 00:37:06.977 00:56:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112342 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 112342 ']' 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 112342 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112342 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:06.977 killing process with pid 112342 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112342' 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 112342 00:37:06.977 00:56:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 112342 00:37:09.567 00:37:09.567 real 0m10.164s 00:37:09.567 user 0m17.575s 00:37:09.567 sys 0m1.933s 00:37:09.567 00:56:14 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:09.567 00:56:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 ************************************ 00:37:09.567 END TEST keyring_linux 00:37:09.567 ************************************ 00:37:09.567 00:56:14 -- common/autotest_common.sh@1142 -- # return 0 00:37:09.567 00:56:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@330 -- # '[' 0 
-eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:09.567 00:56:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:09.567 00:56:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:09.567 00:56:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:09.567 00:56:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:09.567 00:56:14 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:09.567 00:56:14 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:09.567 00:56:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:09.567 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 00:56:14 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:09.567 00:56:14 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:09.567 00:56:14 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:09.567 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:37:10.940 INFO: APP EXITING 00:37:10.940 INFO: killing all VMs 00:37:10.940 INFO: killing vhost app 00:37:10.940 INFO: EXIT DONE 00:37:11.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:11.874 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:11.874 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:12.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:12.442 Cleaning 00:37:12.442 Removing: /var/run/dpdk/spdk0/config 00:37:12.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:12.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:12.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:12.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:12.442 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:12.442 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:12.442 Removing: /var/run/dpdk/spdk1/config 00:37:12.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:12.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:12.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:12.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:12.442 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:12.442 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:12.442 Removing: /var/run/dpdk/spdk2/config 00:37:12.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:12.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:12.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:12.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:12.442 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:12.442 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:12.442 Removing: /var/run/dpdk/spdk3/config 00:37:12.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:12.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:12.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:12.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:12.442 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:12.442 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:37:12.442 Removing: /var/run/dpdk/spdk4/config 00:37:12.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:12.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:12.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:12.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:12.442 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:12.442 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:12.442 Removing: /dev/shm/nvmf_trace.0 00:37:12.442 Removing: /dev/shm/spdk_tgt_trace.pid61407 00:37:12.442 Removing: /var/run/dpdk/spdk0 00:37:12.442 Removing: /var/run/dpdk/spdk1 00:37:12.442 Removing: /var/run/dpdk/spdk2 00:37:12.442 Removing: /var/run/dpdk/spdk3 00:37:12.442 Removing: /var/run/dpdk/spdk4 00:37:12.442 Removing: /var/run/dpdk/spdk_pid100326 00:37:12.700 Removing: /var/run/dpdk/spdk_pid101696 00:37:12.700 Removing: /var/run/dpdk/spdk_pid102315 00:37:12.700 Removing: /var/run/dpdk/spdk_pid102318 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104267 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104370 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104467 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104584 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104770 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104862 00:37:12.700 Removing: /var/run/dpdk/spdk_pid104959 00:37:12.700 Removing: /var/run/dpdk/spdk_pid105057 00:37:12.700 Removing: /var/run/dpdk/spdk_pid105423 00:37:12.700 Removing: /var/run/dpdk/spdk_pid106121 00:37:12.700 Removing: /var/run/dpdk/spdk_pid107476 00:37:12.700 Removing: /var/run/dpdk/spdk_pid107684 00:37:12.700 Removing: /var/run/dpdk/spdk_pid107973 00:37:12.700 Removing: /var/run/dpdk/spdk_pid108286 00:37:12.700 Removing: /var/run/dpdk/spdk_pid108847 00:37:12.700 Removing: /var/run/dpdk/spdk_pid108857 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109238 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109403 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109559 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109655 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109814 00:37:12.700 Removing: /var/run/dpdk/spdk_pid109923 00:37:12.700 Removing: /var/run/dpdk/spdk_pid110618 00:37:12.700 Removing: /var/run/dpdk/spdk_pid110656 00:37:12.700 Removing: /var/run/dpdk/spdk_pid110687 00:37:12.700 Removing: /var/run/dpdk/spdk_pid111148 00:37:12.700 Removing: /var/run/dpdk/spdk_pid111184 00:37:12.700 Removing: /var/run/dpdk/spdk_pid111215 00:37:12.700 Removing: /var/run/dpdk/spdk_pid111635 00:37:12.700 Removing: /var/run/dpdk/spdk_pid111670 00:37:12.700 Removing: /var/run/dpdk/spdk_pid112153 00:37:12.700 Removing: /var/run/dpdk/spdk_pid112342 00:37:12.700 Removing: /var/run/dpdk/spdk_pid112378 00:37:12.700 Removing: /var/run/dpdk/spdk_pid61180 00:37:12.700 Removing: /var/run/dpdk/spdk_pid61407 00:37:12.700 Removing: /var/run/dpdk/spdk_pid61692 00:37:12.700 Removing: /var/run/dpdk/spdk_pid61808 00:37:12.700 Removing: /var/run/dpdk/spdk_pid61876 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62009 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62045 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62193 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62486 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62688 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62798 00:37:12.700 Removing: /var/run/dpdk/spdk_pid62924 00:37:12.700 Removing: /var/run/dpdk/spdk_pid63042 00:37:12.700 Removing: /var/run/dpdk/spdk_pid63087 00:37:12.700 Removing: /var/run/dpdk/spdk_pid63129 00:37:12.700 Removing: /var/run/dpdk/spdk_pid63196 00:37:12.700 Removing: /var/run/dpdk/spdk_pid63321 
00:37:12.700 Removing: /var/run/dpdk/spdk_pid63974 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64067 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64159 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64192 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64346 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64374 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64533 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64572 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64642 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64683 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64753 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64783 00:37:12.700 Removing: /var/run/dpdk/spdk_pid64987 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65029 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65110 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65209 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65240 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65318 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65365 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65411 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65458 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65510 00:37:12.700 Removing: /var/run/dpdk/spdk_pid65551 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65603 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65644 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65696 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65743 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65789 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65836 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65888 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65929 00:37:12.701 Removing: /var/run/dpdk/spdk_pid65981 00:37:12.701 Removing: /var/run/dpdk/spdk_pid66022 00:37:12.701 Removing: /var/run/dpdk/spdk_pid66074 00:37:12.701 Removing: /var/run/dpdk/spdk_pid66118 00:37:12.701 Removing: /var/run/dpdk/spdk_pid66173 00:37:12.958 Removing: /var/run/dpdk/spdk_pid66220 00:37:12.958 Removing: /var/run/dpdk/spdk_pid66267 00:37:12.958 Removing: /var/run/dpdk/spdk_pid66349 00:37:12.958 Removing: /var/run/dpdk/spdk_pid66483 00:37:12.958 Removing: /var/run/dpdk/spdk_pid66927 00:37:12.958 Removing: /var/run/dpdk/spdk_pid73814 00:37:12.958 Removing: /var/run/dpdk/spdk_pid74181 00:37:12.958 Removing: /var/run/dpdk/spdk_pid76781 00:37:12.958 Removing: /var/run/dpdk/spdk_pid77176 00:37:12.958 Removing: /var/run/dpdk/spdk_pid77451 00:37:12.958 Removing: /var/run/dpdk/spdk_pid77499 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78132 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78553 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78566 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78621 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78680 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78746 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78785 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78795 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78826 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78867 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78870 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78935 00:37:12.958 Removing: /var/run/dpdk/spdk_pid78993 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79056 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79095 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79109 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79136 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79464 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79638 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79913 00:37:12.958 Removing: /var/run/dpdk/spdk_pid79963 00:37:12.958 Removing: /var/run/dpdk/spdk_pid80358 00:37:12.958 Removing: 
00:37:12.958 Removing: /var/run/dpdk/spdk_pid80911
00:37:12.958 Removing: /var/run/dpdk/spdk_pid81365
00:37:12.958 Removing: /var/run/dpdk/spdk_pid82397
00:37:12.958 Removing: /var/run/dpdk/spdk_pid83404
00:37:12.958 Removing: /var/run/dpdk/spdk_pid83533
00:37:12.958 Removing: /var/run/dpdk/spdk_pid83614
00:37:12.958 Removing: /var/run/dpdk/spdk_pid85131
00:37:12.958 Removing: /var/run/dpdk/spdk_pid85407
00:37:12.958 Removing: /var/run/dpdk/spdk_pid90754
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91225
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91335
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91489
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91550
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91598
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91657
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91845
00:37:12.958 Removing: /var/run/dpdk/spdk_pid91999
00:37:12.958 Removing: /var/run/dpdk/spdk_pid92302
00:37:12.958 Removing: /var/run/dpdk/spdk_pid92448
00:37:12.958 Removing: /var/run/dpdk/spdk_pid92715
00:37:12.958 Removing: /var/run/dpdk/spdk_pid92866
00:37:12.958 Removing: /var/run/dpdk/spdk_pid93025
00:37:12.958 Removing: /var/run/dpdk/spdk_pid93389
00:37:12.958 Removing: /var/run/dpdk/spdk_pid93793
00:37:12.958 Removing: /var/run/dpdk/spdk_pid93807
00:37:12.958 Removing: /var/run/dpdk/spdk_pid96106
00:37:12.958 Removing: /var/run/dpdk/spdk_pid96436
00:37:12.958 Removing: /var/run/dpdk/spdk_pid96959
00:37:12.958 Removing: /var/run/dpdk/spdk_pid96962
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97318
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97339
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97354
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97387
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97403
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97550
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97557
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97657
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97670
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97770
00:37:12.958 Removing: /var/run/dpdk/spdk_pid97777
00:37:12.958 Removing: /var/run/dpdk/spdk_pid98255
00:37:12.958 Removing: /var/run/dpdk/spdk_pid98291
00:37:12.958 Removing: /var/run/dpdk/spdk_pid98440
00:37:12.958 Removing: /var/run/dpdk/spdk_pid98555
00:37:12.958 Removing: /var/run/dpdk/spdk_pid98961
00:37:12.958 Removing: /var/run/dpdk/spdk_pid99217
00:37:12.958 Removing: /var/run/dpdk/spdk_pid99729
00:37:12.958 Clean
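The "Removing:" entries above are the "Clean" step of SPDK's post-test cleanup: it deletes the per-prefix DPDK runtime directories under /var/run/dpdk (config, fbarray_memseg-* metadata, hugepage_info) and the trace files SPDK mapped into /dev/shm, so stale hugepage and trace state cannot leak into the next run. A minimal stand-alone sketch of the same idea, using the paths shown in the log (an illustration, not SPDK's actual cleanup code):

  #!/usr/bin/env bash
  # Illustrative sketch: clear DPDK runtime state and SPDK trace files
  # left behind by a test run. Paths are the ones the log shows.
  set -euo pipefail

  # Per-prefix DPDK runtime directories (spdk0..spdk4 in this job); each
  # holds config, fbarray_memseg-* metadata and hugepage_info.
  for d in /var/run/dpdk/spdk*; do
      [ -e "$d" ] && rm -rf "$d"
  done

  # Trace shared-memory files created by the nvmf target and spdk_tgt.
  rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*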
00:37:13.215 00:56:17 -- common/autotest_common.sh@1451 -- # return 0
00:37:13.215 00:56:17 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:37:13.215 00:56:17 -- common/autotest_common.sh@728 -- # xtrace_disable
00:37:13.215 00:56:17 -- common/autotest_common.sh@10 -- # set +x
00:37:13.215 00:56:17 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:37:13.215 00:56:17 -- common/autotest_common.sh@728 -- # xtrace_disable
00:37:13.215 00:56:17 -- common/autotest_common.sh@10 -- # set +x
00:37:13.215 00:56:18 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:13.215 00:56:18 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:37:13.215 00:56:18 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:37:13.215 00:56:18 -- spdk/autotest.sh@391 -- # hash lcov
00:37:13.215 00:56:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:37:13.215 00:56:18 -- spdk/autotest.sh@393 -- # hostname
00:37:13.215 00:56:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:37:13.473 geninfo: WARNING: invalid characters removed from testname!
00:37:45.537 00:56:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:47.479 00:56:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:50.002 00:56:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:53.283 00:56:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:56.607 00:57:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:59.136 00:57:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:02.415 00:57:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
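The lcov invocations above are the coverage post-processing: capture counters from the test run (-c), merge them with the pre-test baseline (-a), then repeatedly filter (-r) to drop DPDK, system headers and example/tool code from the combined report. Condensed into a stand-alone script with the shared flags factored into one array (an editorial convenience; autotest.sh spells the flags out on every call):

  #!/usr/bin/env bash
  set -euo pipefail
  repo=/home/vagrant/spdk_repo/spdk
  out=$repo/../output

  # Flags common to every lcov call in the log.
  lcov_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
             --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
             --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q)

  # Capture the counters produced by the test run.
  lcov "${lcov_opts[@]}" -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

  # Merge the pre-test baseline with the test capture.
  lcov "${lcov_opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" \
       -o "$out/cov_total.info"

  # Strip third-party, system and tool code from the combined report.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov "${lcov_opts[@]}" -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done

  rm -f "$out/cov_base.info" "$out/cov_test.info"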
00:38:02.415 00:57:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:38:02.415 00:57:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:02.415 00:57:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:02.415 00:57:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:02.415 00:57:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:02.415 00:57:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:02.415 00:57:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:02.415 00:57:06 -- paths/export.sh@5 -- $ export PATH
00:38:02.415 00:57:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:02.415 00:57:06 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:38:02.415 00:57:06 -- common/autobuild_common.sh@444 -- $ date +%s
00:38:02.415 00:57:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720745826.XXXXXX
00:38:02.415 00:57:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720745826.vSWoj6
00:38:02.415 00:57:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:38:02.415 00:57:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:38:02.415 00:57:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:38:02.415 00:57:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:38:02.415 00:57:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:38:02.415 00:57:06 -- common/autobuild_common.sh@460 -- $ get_config_params
00:38:02.415 00:57:06 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:38:02.415 00:57:06 -- common/autotest_common.sh@10 -- $ set +x
00:38:02.415 00:57:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
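The config_params line records exactly how this SPDK tree was configured, which makes the build reproducible outside CI. A sketch of replaying it locally, assuming an SPDK checkout at the path from the log and fio sources under /usr/src/fio:

  # Reproduce this job's build configuration (flags copied verbatim from
  # the config_params line above; -j10 matches the MAKEFLAGS that
  # autopackage.sh sets later in the log).
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
      --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage \
      --with-ublk --with-vfio-user --with-avahi --with-golang
  make -j10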
00:38:02.415 00:57:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:38:02.415 00:57:06 -- pm/common@17 -- $ local monitor
00:38:02.415 00:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.415 00:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.415 00:57:06 -- pm/common@25 -- $ sleep 1
00:38:02.415 00:57:06 -- pm/common@21 -- $ date +%s
00:38:02.415 00:57:06 -- pm/common@21 -- $ date +%s
00:38:02.415 00:57:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720745826
00:38:02.415 00:57:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720745826
00:38:02.415 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720745826_collect-vmstat.pm.log
00:38:02.415 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720745826_collect-cpu-load.pm.log
00:38:03.349 00:57:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:38:03.349 00:57:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:38:03.349 00:57:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:38:03.349 00:57:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:38:03.349 00:57:07 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:38:03.349 00:57:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:38:03.349 00:57:07 -- spdk/autopackage.sh@19 -- $ timing_finish
00:38:03.349 00:57:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:03.349 00:57:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:03.349 00:57:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:03.349 00:57:07 -- spdk/autopackage.sh@20 -- $ exit 0
00:38:03.349 00:57:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:03.349 00:57:07 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:03.349 00:57:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:03.349 00:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.349 00:57:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:38:03.349 00:57:08 -- pm/common@44 -- $ pid=114133
00:38:03.349 00:57:08 -- pm/common@50 -- $ kill -TERM 114133
00:38:03.349 00:57:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.349 00:57:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:38:03.349 00:57:08 -- pm/common@44 -- $ pid=114134
00:38:03.349 00:57:08 -- pm/common@50 -- $ kill -TERM 114134
00:38:03.349 + [[ -n 5168 ]]
00:38:03.349 + sudo kill 5168
00:38:04.306 [Pipeline] }
00:38:04.328 [Pipeline] // timeout
00:38:04.332 [Pipeline] }
00:38:04.342 [Pipeline] // stage
00:38:04.345 [Pipeline] }
00:38:04.355 [Pipeline] // catchError
00:38:04.360 [Pipeline] stage
00:38:04.362 [Pipeline] { (Stop VM)
00:38:04.370 [Pipeline] sh
00:38:04.642 + vagrant halt
00:38:08.828 ==> default: Halting domain...
00:38:14.118 [Pipeline] sh
00:38:14.393 + vagrant destroy -f
00:38:18.610 ==> default: Removing domain...
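Before the VM teardown above, the stop_monitor_resources trap stops the two collectors through a conventional pidfile handshake: each monitor wrote its PID under output/power/, and the EXIT trap sends SIGTERM to whatever pidfiles it finds. A generic stand-alone version of that pattern (the function and glob names here are illustrative, not the real pm/common internals):

  #!/usr/bin/env bash
  # Illustrative pidfile-based shutdown mirroring signal_monitor_resources.
  power_dir=/home/vagrant/spdk_repo/spdk/../output/power

  stop_monitors() {
      local pidfile pid
      for pidfile in "$power_dir"/collect-*.pid; do
          [ -e "$pidfile" ] || continue
          pid=$(<"$pidfile")
          # The collector may have exited already; ignore a failed kill.
          kill -TERM "$pid" 2>/dev/null || true
          rm -f "$pidfile"
      done
  }

  trap stop_monitors EXIT   # same idea as `trap stop_monitor_resources EXIT`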
00:38:18.623 [Pipeline] sh
00:38:18.922 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:38:18.930 [Pipeline] }
00:38:18.942 [Pipeline] // stage
00:38:18.945 [Pipeline] }
00:38:18.956 [Pipeline] // dir
00:38:18.961 [Pipeline] }
00:38:18.974 [Pipeline] // wrap
00:38:18.980 [Pipeline] }
00:38:18.992 [Pipeline] // catchError
00:38:19.000 [Pipeline] stage
00:38:19.002 [Pipeline] { (Epilogue)
00:38:19.015 [Pipeline] sh
00:38:19.288 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:27.398 [Pipeline] catchError
00:38:27.400 [Pipeline] {
00:38:27.416 [Pipeline] sh
00:38:27.696 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:27.974 Artifacts sizes are good
00:38:27.987 [Pipeline] }
00:38:28.004 [Pipeline] // catchError
00:38:28.017 [Pipeline] archiveArtifacts
00:38:28.023 Archiving artifacts
00:38:28.191 [Pipeline] cleanWs
00:38:28.208 [WS-CLEANUP] Deleting project workspace...
00:38:28.208 [WS-CLEANUP] Deferred wipeout is used...
00:38:28.214 [WS-CLEANUP] done
00:38:28.216 [Pipeline] }
00:38:28.243 [Pipeline] // stage
00:38:28.284 [Pipeline] }
00:38:28.292 [Pipeline] // node
00:38:28.324 [Pipeline] End of Pipeline
00:38:28.324 Finished: SUCCESS
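The Epilogue stage that closes the job compresses the collected output and fails the build if the artifacts outgrow the job's size budget, which is what the "Artifacts sizes are good" line confirms. A rough stand-alone equivalent of those two helper scripts (the gzip strategy and the 5 GiB cap are assumptions for illustration; the real logic lives in compress_artifacts.sh and check_artifacts_size.sh):

  #!/usr/bin/env bash
  # Rough sketch of compress_artifacts.sh + check_artifacts_size.sh.
  # NOTE: the compression choice and the 5 GiB cap are placeholders.
  set -euo pipefail
  output=/var/jenkins/workspace/nvmf-tcp-vg-autotest/output

  # Compress bulky logs before archiving.
  find "$output" -type f -name '*.log' -exec gzip -f {} +

  # Fail the stage if the artifacts exceed the assumed budget.
  size_kb=$(du -sk "$output" | cut -f1)
  limit_kb=$((5 * 1024 * 1024))
  if [ "$size_kb" -gt "$limit_kb" ]; then
      echo "Artifacts too large: ${size_kb} KB > ${limit_kb} KB" >&2
      exit 1
  fi
  echo "Artifacts sizes are good"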